2016-12-02 15:28:54,188 INFO [main] hbase.HBaseTestingUtility(516): Created new mini-cluster data directory: /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05, deleteOnExit=true
2016-12-02 15:28:54,935 INFO [main] zookeeper.MiniZooKeeperCluster(276): Started MiniZooKeeperCluster and ran successful 'stat' on client port=60648
2016-12-02 15:28:54,971 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=cluster1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:28:54,998 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): cluster10x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:28:54,999 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): cluster1-0x158c1de825b0000 connected
2016-12-02 15:28:55,013 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=cluster2 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:28:55,015 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): cluster20x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:28:55,015 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): cluster2-0x158c1de825b0001 connected
2016-12-02 15:28:55,048 WARN [main] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-12-02 15:28:55,238 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x77888435 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:28:55,243 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x778884350x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:28:55,243 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x77888435-0x158c1de825b0002 connected
2016-12-02 15:28:55,243 INFO [main] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:28:55,243 DEBUG [main] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:28:55,296 DEBUG [main] util.ClassSize(230): Using Unsafe to estimate memory layout
2016-12-02 15:28:55,302 DEBUG [main] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c6908d7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:28:55,326 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=ReplicationAdmin connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:28:55,330 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): ReplicationAdmin0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:28:55,330 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): ReplicationAdmin-0x158c1de825b0003 connected
2016-12-02 15:28:55,500 DEBUG [main] zookeeper.RecoverableZooKeeper(584): Node /1/replication/peers already exists
2016-12-02 15:28:55,519 INFO [main] hbase.HBaseTestingUtility(1033): Starting up minicluster with 1 master(s) and 10 regionserver(s) and 10 datanode(s)
2016-12-02 15:28:55,519 INFO [main] hbase.HBaseTestingUtility(763): Setting test.cache.data to /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/cache_data in system properties and HBase conf
2016-12-02 15:28:55,519 INFO [main] hbase.HBaseTestingUtility(763): Setting hadoop.tmp.dir to /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop_tmp in system properties and HBase conf
2016-12-02 15:28:55,520 INFO [main] hbase.HBaseTestingUtility(763): Setting hadoop.log.dir to /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop_logs in system properties and HBase conf
2016-12-02 15:28:55,520 INFO [main] hbase.HBaseTestingUtility(763): Setting mapreduce.cluster.local.dir to /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/mapred_local in system properties and HBase conf
2016-12-02 15:28:55,520 INFO [main] hbase.HBaseTestingUtility(763): Setting mapreduce.cluster.temp.dir to /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/mapred_temp in system properties and HBase conf
2016-12-02 15:28:55,520 INFO [main] hbase.HBaseTestingUtility(754): read short circuit is OFF
2016-12-02 15:28:55,523 DEBUG [main] fs.HFileSystem(244): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2016-12-02 15:28:56,180 WARN [main] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2016-12-02 15:28:56,260 INFO [main] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-12-02 15:28:56,295 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:56,312 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/hdfs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52401_hdfs____.xrgela/webapp
2016-12-02 15:28:56,409 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52401
2016-12-02 15:28:56,807 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:56,810 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52404_datanode____rx7obm/webapp
2016-12-02 15:28:56,903 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52404
2016-12-02 15:28:57,040 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:57,043 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52409_datanode____3nsg9z/webapp
2016-12-02 15:28:57,112 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52409
2016-12-02 15:28:57,196 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:57,200 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52413_datanode____oewg00/webapp
2016-12-02 15:28:57,299 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52413
2016-12-02 15:28:57,511 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:57,515 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52417_datanode____505v5w/webapp
2016-12-02 15:28:57,632 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52417
2016-12-02 15:28:57,667 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a node DatanodeRegistration(127.0.0.1:52403, datanodeUuid=605bc8ad-63d6-400c-9e74-24ec0df4749d, infoPort=52405, infoSecurePort=0, ipcPort=52406, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-12-02 15:28:57,667 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3 node DatanodeRegistration(127.0.0.1:52412, datanodeUuid=6e0c0a53-46e3-4f17-80e7-c5fa4983be51, infoPort=52414, infoSecurePort=0, ipcPort=52415, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2016-12-02 15:28:57,667 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-378ff72f-b0d6-4d05-815b-ae7795fe2171 node DatanodeRegistration(127.0.0.1:52407, datanodeUuid=32930b8d-27fc-4d06-8858-24b922636671, infoPort=52410, infoSecurePort=0, ipcPort=52411, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-12-02 15:28:57,668 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-09afa37d-7680-43c2-9a55-48fdc90bdca3 node DatanodeRegistration(127.0.0.1:52412, datanodeUuid=6e0c0a53-46e3-4f17-80e7-c5fa4983be51, infoPort=52414, infoSecurePort=0, ipcPort=52415, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:57,668 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-228e15b4-22f0-4d12-a083-5b9d180b1d06 node DatanodeRegistration(127.0.0.1:52403, datanodeUuid=605bc8ad-63d6-400c-9e74-24ec0df4749d, infoPort=52405, infoSecurePort=0, ipcPort=52406, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:57,668 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-370520ed-6fc4-4604-b142-a5d4284a311c node DatanodeRegistration(127.0.0.1:52407, datanodeUuid=32930b8d-27fc-4d06-8858-24b922636671, infoPort=52410, infoSecurePort=0, ipcPort=52411, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:57,762 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:57,763 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0 node DatanodeRegistration(127.0.0.1:52416, datanodeUuid=bd32d46d-6962-42fa-9a28-5f84d4a16693, infoPort=52418, infoSecurePort=0, ipcPort=52419, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-12-02 15:28:57,763 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270 node DatanodeRegistration(127.0.0.1:52416, datanodeUuid=bd32d46d-6962-42fa-9a28-5f84d4a16693, infoPort=52418, infoSecurePort=0, ipcPort=52419, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:57,766 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52421_datanode____pr9uvx/webapp
2016-12-02 15:28:57,838 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52421
2016-12-02 15:28:57,935 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:57,937 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083 node DatanodeRegistration(127.0.0.1:52420, datanodeUuid=a0f52948-2a30-4098-ae5a-f778692c8af0, infoPort=52422, infoSecurePort=0, ipcPort=52423, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-12-02 15:28:57,937 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-7ccb6605-886f-405c-ad06-219ad508d964 node DatanodeRegistration(127.0.0.1:52420, datanodeUuid=a0f52948-2a30-4098-ae5a-f778692c8af0, infoPort=52422, infoSecurePort=0, ipcPort=52423, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:57,938 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52425_datanode____6cja1t/webapp
2016-12-02 15:28:58,008 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52425
2016-12-02 15:28:58,109 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:58,110 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-7977f021-6728-4c15-9596-ae0129596140 node DatanodeRegistration(127.0.0.1:52424, datanodeUuid=8ec5da31-8fea-4bc3-9095-c0f196238942, infoPort=52426, infoSecurePort=0, ipcPort=52427, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2016-12-02 15:28:58,111 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242 node DatanodeRegistration(127.0.0.1:52424, datanodeUuid=8ec5da31-8fea-4bc3-9095-c0f196238942, infoPort=52426, infoSecurePort=0, ipcPort=52427, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:58,112 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52429_datanode____.d27asb/webapp
2016-12-02 15:28:58,189 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52429
2016-12-02 15:28:58,290 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:58,291 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1 node DatanodeRegistration(127.0.0.1:52428, datanodeUuid=c2a456be-b211-4171-a22f-943beeeac333, infoPort=52430, infoSecurePort=0, ipcPort=52431, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-12-02 15:28:58,291 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-acaec845-0744-4b60-8e8f-289bfadf69f9 node DatanodeRegistration(127.0.0.1:52428, datanodeUuid=c2a456be-b211-4171-a22f-943beeeac333, infoPort=52430, infoSecurePort=0, ipcPort=52431, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:58,293 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52433_datanode____7owoxq/webapp
2016-12-02 15:28:58,359 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52433
2016-12-02 15:28:58,459 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:58,462 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-7b1dc621-03e2-4208-a089-45882edf6203 node DatanodeRegistration(127.0.0.1:52432, datanodeUuid=4f1b192e-7268-4d48-b256-8c2fb21cfe81, infoPort=52434, infoSecurePort=0, ipcPort=52435, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-12-02 15:28:58,462 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7 node DatanodeRegistration(127.0.0.1:52432, datanodeUuid=4f1b192e-7268-4d48-b256-8c2fb21cfe81, infoPort=52434, infoSecurePort=0, ipcPort=52435, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:58,464 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52437_datanode____.bptvwe/webapp
2016-12-02 15:28:58,534 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52437
2016-12-02 15:28:58,639 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:28:58,642 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-cc7f8b08-497e-4564-9350-b8bad4875d61 node DatanodeRegistration(127.0.0.1:52436, datanodeUuid=9caba0a3-d6ac-4110-b857-c6cd9fe9c5a0, infoPort=52438, infoSecurePort=0, ipcPort=52439, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
2016-12-02 15:28:58,643 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-9ec9db34-4e19-4191-b144-62275f2077e0 node DatanodeRegistration(127.0.0.1:52436, datanodeUuid=9caba0a3-d6ac-4110-b857-c6cd9fe9c5a0, infoPort=52438, infoSecurePort=0, ipcPort=52439, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:58,643 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52441_datanode____91a3tn/webapp
2016-12-02 15:28:58,716 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52441
2016-12-02 15:28:58,823 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-491a9a80-1bde-48c2-bd71-6e53e8242d74 node DatanodeRegistration(127.0.0.1:52440, datanodeUuid=b8404b60-9359-40dc-a42c-a42bdfdf63da, infoPort=52442, infoSecurePort=0, ipcPort=52443, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2016-12-02 15:28:58,824 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897 node DatanodeRegistration(127.0.0.1:52440, datanodeUuid=b8404b60-9359-40dc-a42c-a42bdfdf63da, infoPort=52442, infoSecurePort=0, ipcPort=52443, storageInfo=lv=-56;cid=testClusterID;nsid=972164534;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:28:58,990 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:28:58,992 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:28:59,157 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW]]} size 7
2016-12-02 15:28:59,157 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741825_1001 size 7
2016-12-02 15:28:59,158 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741825_1001 size 7
2016-12-02 15:28:59,570 INFO [main] util.FSUtils(760): Created version file at hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd with version=8
2016-12-02 15:28:59,570 INFO [main] hbase.HBaseTestingUtility(1283): The hbase.fs.tmp.dir is set to /user/tyu/hbase-staging
2016-12-02 15:29:00,011 INFO [main] client.ConnectionUtils(128): master//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,038 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,039 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,039 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,040 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:00,044 INFO [main] ipc.RpcServer$Listener(801): master//10.10.9.179:0: started 3 reader(s) listening on port=52448
2016-12-02 15:29:00,084 INFO [main] hfile.CacheConfig(588): Allocating LruBlockCache size=995.60 MB, blockSize=64 KB
2016-12-02 15:29:00,092 DEBUG [main] hfile.CacheConfig(603): Trying to use Internal l2 cache
2016-12-02 15:29:00,092 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,093 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,097 INFO [main] mob.MobFileCache(121): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2016-12-02 15:29:00,099 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,131 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=master:52448 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,137 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:524480x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,137 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): master:52448-0x158c1de825b0004 connected
2016-12-02 15:29:00,138 DEBUG [main] zookeeper.RecoverableZooKeeper(584): Node /1 already exists
2016-12-02 15:29:00,158 DEBUG [main] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2016-12-02 15:29:00,159 DEBUG [main] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,167 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,167 INFO [RpcServer.listener,port=52448] ipc.RpcServer$Listener(882): RpcServer.listener,port=52448: starting
2016-12-02 15:29:00,167 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448
2016-12-02 15:29:00,168 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448
2016-12-02 15:29:00,168 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448
2016-12-02 15:29:00,169 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52448
2016-12-02 15:29:00,169 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52448
2016-12-02 15:29:00,169 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52448
2016-12-02 15:29:00,169 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52448
2016-12-02 15:29:00,169 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52448
2016-12-02 15:29:00,170 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52448
2016-12-02 15:29:00,170 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52448
2016-12-02 15:29:00,170 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52448
2016-12-02 15:29:00,170 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52448
2016-12-02 15:29:00,170 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52448
2016-12-02 15:29:00,170 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52448
2016-12-02 15:29:00,183 INFO [main] master.HMaster(416): hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd, hbase.cluster.distributed=false
2016-12-02 15:29:00,199 INFO [main] master.HMaster(1840): Adding backup master ZNode /1/backup-masters/10.10.9.179,52448,1480721340079
2016-12-02 15:29:00,209 DEBUG [main] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/backup-masters/10.10.9.179,52448,1480721340079
2016-12-02 15:29:00,222 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/master
2016-12-02 15:29:00,223 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,223 INFO [10.10.9.179:52448.activeMasterManager] master.ActiveMasterManager(171): Deleting ZNode for /1/backup-masters/10.10.9.179,52448,1480721340079 from backup master directory
2016-12-02 15:29:00,225 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,225 DEBUG [main-EventThread] master.ActiveMasterManager(127): A master is now available
2016-12-02 15:29:00,226 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/backup-masters/10.10.9.179,52448,1480721340079
2016-12-02 15:29:00,227 WARN [10.10.9.179:52448.activeMasterManager] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
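The entries so far show the harness for a two-cluster replication test: one MiniZooKeeperCluster on client port 60648, two ZooKeeperWatcher sessions rooted at base znodes /1 and /2, a ReplicationAdmin session registering under /1/replication/peers, and a mini cluster started with 1 master and 10 region servers. A minimal sketch of the kind of JUnit setup that produces such output, assuming the HBaseTestingUtility and ReplicationAdmin APIs of this era of HBase; the peer id and variable names are illustrative, not taken from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
    import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

    public class TwoClusterSetupSketch {
      public static void main(String[] args) throws Exception {
        // Two configurations sharing one ZK ensemble, distinguished by base
        // znode, matching baseZNode=/1 and baseZNode=/2 in the log.
        Configuration conf1 = HBaseConfiguration.create();
        conf1.set("zookeeper.znode.parent", "/1");
        HBaseTestingUtility utility1 = new HBaseTestingUtility(conf1);
        utility1.startMiniZKCluster(); // "Started MiniZooKeeperCluster ..."

        Configuration conf2 = HBaseConfiguration.create(conf1);
        conf2.set("zookeeper.znode.parent", "/2");
        HBaseTestingUtility utility2 = new HBaseTestingUtility(conf2);
        utility2.setZkCluster(utility1.getZkCluster()); // share the ensemble

        // "Starting up minicluster with 1 master(s) and 10 regionserver(s) ..."
        utility1.startMiniCluster(1, 10);
        utility2.startMiniCluster(1, 10);

        // "Process identifier=ReplicationAdmin connecting to ZooKeeper ..."
        ReplicationAdmin admin = new ReplicationAdmin(conf1);
        ReplicationPeerConfig peer = new ReplicationPeerConfig();
        peer.setClusterKey(utility2.getClusterKey());
        admin.addPeer("2", peer); // recorded under /1/replication/peers
        admin.close();
      }
    }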
2016-12-02 15:29:00,227 INFO [10.10.9.179:52448.activeMasterManager] master.ActiveMasterManager(180): Registered Active Master=10.10.9.179,52448,1480721340079
2016-12-02 15:29:00,269 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,269 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,269 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,270 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,270 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:00,271 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52450
2016-12-02 15:29:00,275 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,275 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,276 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,277 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52450 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,283 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524500x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,284 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52450-0x158c1de825b0005 connected
2016-12-02 15:29:00,284 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,285 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,288 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,288 INFO [RpcServer.listener,port=52450] ipc.RpcServer$Listener(882): RpcServer.listener,port=52450: starting
2016-12-02 15:29:00,288 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52450
2016-12-02 15:29:00,289 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52450
2016-12-02 15:29:00,289 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52450
2016-12-02 15:29:00,292 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52450
2016-12-02 15:29:00,292 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52450
2016-12-02 15:29:00,292 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52450
2016-12-02 15:29:00,293 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52450
2016-12-02 15:29:00,293 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52450
2016-12-02 15:29:00,293 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52450
2016-12-02 15:29:00,294 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52450
2016-12-02 15:29:00,294 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52450
2016-12-02 15:29:00,294 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52450
2016-12-02 15:29:00,294 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52450
2016-12-02 15:29:00,295 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52450
2016-12-02 15:29:00,307 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,307 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,307 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,308 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,308 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:00,309 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52454
2016-12-02 15:29:00,310 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,311 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,311 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,313 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52454 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,317 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524540x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,318 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52454-0x158c1de825b0006 connected
2016-12-02 15:29:00,318 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,319 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,322 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,322 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52454
2016-12-02 15:29:00,322 INFO [RpcServer.listener,port=52454] ipc.RpcServer$Listener(882): RpcServer.listener,port=52454: starting
2016-12-02 15:29:00,322 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52454
2016-12-02 15:29:00,323 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52454
2016-12-02 15:29:00,324 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52454
2016-12-02 15:29:00,324 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52454
2016-12-02 15:29:00,324 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52454
2016-12-02 15:29:00,324 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52454
2016-12-02 15:29:00,324 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52454
2016-12-02 15:29:00,324 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52454
2016-12-02 15:29:00,325 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52454
2016-12-02 15:29:00,325 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52454
2016-12-02 15:29:00,325 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52454
2016-12-02 15:29:00,325 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52454
2016-12-02 15:29:00,325 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52454
2016-12-02 15:29:00,346 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,346 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,346 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,346 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,346 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
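The client-side sessions in this log (cluster1, cluster2, hconnection-0x77888435) all reach the mini clusters through the same ZooKeeper ensemble on localhost:60648, with the base znode selecting the cluster. A minimal client sketch under the same assumptions, with the quorum, port, and znode parent taken from the log and the table name purely illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Table;

    public class MiniClusterClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.setInt("hbase.zookeeper.property.clientPort", 60648);
        conf.set("zookeeper.znode.parent", "/1"); // baseZNode=/1 in the log
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("test"))) {
          // issue Gets/Puts against the first mini cluster here
        }
      }
    }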
2016-12-02 15:29:00,349 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52460
2016-12-02 15:29:00,349 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|FINALIZED]]} size 0
2016-12-02 15:29:00,351 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|FINALIZED]]} size 0
2016-12-02 15:29:00,351 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,352 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,354 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|FINALIZED]]} size 0
2016-12-02 15:29:00,355 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,357 DEBUG [10.10.9.179:52448.activeMasterManager] util.FSUtils(912): Created cluster ID file at hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/hbase.id with ID: bec31e96-5e53-44a0-979b-2eef7e7b4feb
2016-12-02 15:29:00,359 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52460 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,362 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524600x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,363 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52460-0x158c1de825b0007 connected
2016-12-02 15:29:00,363 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,364 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,368 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,368 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52460
2016-12-02 15:29:00,368 INFO [RpcServer.listener,port=52460] ipc.RpcServer$Listener(882): RpcServer.listener,port=52460: starting
2016-12-02 15:29:00,369 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52460
2016-12-02 15:29:00,370 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52460
2016-12-02 15:29:00,370 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52460
2016-12-02 15:29:00,370 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52460
2016-12-02 15:29:00,370 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52460
2016-12-02 15:29:00,371 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52460
2016-12-02 15:29:00,371 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52460
2016-12-02 15:29:00,371 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52460
2016-12-02 15:29:00,371 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52460
2016-12-02 15:29:00,371 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460
2016-12-02 15:29:00,372 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52460
2016-12-02 15:29:00,372 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52460
2016-12-02 15:29:00,372 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52460
2016-12-02 15:29:00,386 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,386 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,386 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,386 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,386 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:00,387 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52464
2016-12-02 15:29:00,389 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,389 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,390 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,391 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52464 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,395 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524640x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,395 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52464-0x158c1de825b0008 connected
2016-12-02 15:29:00,395 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,396 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,400 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,400 INFO [RpcServer.listener,port=52464] ipc.RpcServer$Listener(882): RpcServer.listener,port=52464: starting
2016-12-02 15:29:00,400 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52464
2016-12-02 15:29:00,401 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52464
2016-12-02 15:29:00,402 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52464
2016-12-02 15:29:00,402 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52464
2016-12-02 15:29:00,402 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52464
2016-12-02 15:29:00,402 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52464
2016-12-02 15:29:00,402 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52464
2016-12-02 15:29:00,402 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52464
2016-12-02 15:29:00,403 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52464
2016-12-02 15:29:00,403 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52464
2016-12-02 15:29:00,403 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52464
2016-12-02 15:29:00,403 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52464
2016-12-02 15:29:00,403 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52464
2016-12-02 15:29:00,404 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52464
2016-12-02 15:29:00,414 INFO [10.10.9.179:52448.activeMasterManager] master.MasterFileSystem(348): BOOTSTRAP: creating hbase:meta region
2016-12-02 15:29:00,416 INFO [10.10.9.179:52448.activeMasterManager] regionserver.HRegion(6406): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'false', BLOCKSIZE => '8192'}, {NAME => 'rep_barrier', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'}, {NAME => 'rep_meta', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'}, {NAME => 'rep_position', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'}, {NAME => 'table', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd Table name == hbase:meta
2016-12-02 15:29:00,416 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,417 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,417 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,417 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,417 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:00,418 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52467
2016-12-02 15:29:00,421 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,422 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,422 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,423 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52467 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,427 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524670x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,428 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52467-0x158c1de825b0009 connected
2016-12-02 15:29:00,429 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,430 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,433 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,433 INFO [RpcServer.listener,port=52467] ipc.RpcServer$Listener(882): RpcServer.listener,port=52467: starting
2016-12-02 15:29:00,433 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52467
2016-12-02 15:29:00,434 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52467
2016-12-02 15:29:00,434 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52467
2016-12-02 15:29:00,435 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52467
2016-12-02 15:29:00,435 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52467
2016-12-02 15:29:00,435 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52467
2016-12-02 15:29:00,435 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52467
2016-12-02 15:29:00,435 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52467
2016-12-02 15:29:00,436 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52467
2016-12-02 15:29:00,436 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52467
2016-12-02 15:29:00,436 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52467
2016-12-02 15:29:00,436 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52467
2016-12-02 15:29:00,436 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52467
2016-12-02 15:29:00,436 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52467
2016-12-02 15:29:00,467 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW]]} size 0
2016-12-02 15:29:00,467 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,468 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,468 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,468 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,469 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:00,469 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW]]} size 0
2016-12-02 15:29:00,474 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW]]} size 0
2016-12-02 15:29:00,474 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52473
2016-12-02 15:29:00,477 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,477 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,478 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,479 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52473 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,482 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524730x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,483 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52473-0x158c1de825b000a connected
2016-12-02 15:29:00,483 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,483 DEBUG [10.10.9.179:52448.activeMasterManager] regionserver.HRegion(743): Instantiated hbase:meta,,1.1588230740
2016-12-02 15:29:00,484 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,488 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,488 INFO [RpcServer.listener,port=52473] ipc.RpcServer$Listener(882): RpcServer.listener,port=52473: starting
2016-12-02 15:29:00,488 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52473
2016-12-02 15:29:00,489 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52473
2016-12-02 15:29:00,489 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52473
2016-12-02 15:29:00,489 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52473
2016-12-02 15:29:00,489 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52473
2016-12-02 15:29:00,490 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52473
2016-12-02 15:29:00,490 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52473
2016-12-02 15:29:00,490 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52473
2016-12-02 15:29:00,490 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52473
2016-12-02 15:29:00,491 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52473
2016-12-02 15:29:00,491 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473
2016-12-02 15:29:00,491 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52473
2016-12-02 15:29:00,491 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52473
2016-12-02 15:29:00,491 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52473
2016-12-02 15:29:00,503 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:00,504 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:00,504 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:00,504 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:00,504 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:00,505 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52476
2016-12-02 15:29:00,507 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,507 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:00,508 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:00,509 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52476 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:00,513 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524760x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:00,514 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52476-0x158c1de825b000b connected
2016-12-02 15:29:00,514 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master
2016-12-02 15:29:00,517 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:00,521 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:00,521 INFO [RpcServer.listener,port=52476] ipc.RpcServer$Listener(882): RpcServer.listener,port=52476: starting
2016-12-02 15:29:00,521 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52476
2016-12-02 15:29:00,522 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52476
2016-12-02 15:29:00,522 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52476
2016-12-02 15:29:00,523 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52476
2016-12-02 15:29:00,523 DEBUG [main] ipc.RpcExecutor(215): Started
RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52476 2016-12-02 15:29:00,523 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52476 2016-12-02 15:29:00,523 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52476 2016-12-02 15:29:00,523 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52476 2016-12-02 15:29:00,524 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52476 2016-12-02 15:29:00,524 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52476 2016-12-02 15:29:00,524 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52476 2016-12-02 15:29:00,524 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52476 2016-12-02 15:29:00,524 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52476 2016-12-02 15:29:00,524 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52476 2016-12-02 15:29:00,536 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350 2016-12-02 15:29:00,537 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5 2016-12-02 15:29:00,537 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6 2016-12-02 15:29:00,537 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3 2016-12-02 15:29:00,537 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320 2016-12-02 15:29:00,538 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52479 2016-12-02 15:29:00,538 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:00,539 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,539 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,540 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2016-12-02 15:29:00,541 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=765632, 
freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,542 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52479 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,544 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524790x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,546 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52479-0x158c1de825b000c connected 2016-12-02 15:29:00,546 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master 2016-12-02 15:29:00,546 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2016-12-02 15:29:00,550 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting 2016-12-02 15:29:00,550 INFO [RpcServer.listener,port=52479] ipc.RpcServer$Listener(882): RpcServer.listener,port=52479: starting 2016-12-02 15:29:00,550 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52479 2016-12-02 15:29:00,550 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52479 2016-12-02 15:29:00,551 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52479 2016-12-02 15:29:00,551 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52479 2016-12-02 15:29:00,551 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52479 2016-12-02 15:29:00,551 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52479 2016-12-02 15:29:00,551 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52479 2016-12-02 15:29:00,552 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52479 2016-12-02 15:29:00,552 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52479 2016-12-02 15:29:00,552 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52479 2016-12-02 15:29:00,552 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52479 2016-12-02 15:29:00,552 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52479 2016-12-02 15:29:00,553 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52479 2016-12-02 15:29:00,553 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52479 2016-12-02 15:29:00,561 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:00,565 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350 2016-12-02 15:29:00,566 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5 2016-12-02 15:29:00,566 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6 2016-12-02 15:29:00,566 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3 2016-12-02 15:29:00,566 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320 2016-12-02 15:29:00,567 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52482 2016-12-02 15:29:00,570 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,570 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,570 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2016-12-02 15:29:00,571 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52482 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,574 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524820x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,575 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52482-0x158c1de825b000d connected 2016-12-02 15:29:00,576 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master 2016-12-02 15:29:00,576 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2016-12-02 15:29:00,577 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:00,577 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, 
maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,577 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:00,581 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting 2016-12-02 15:29:00,581 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52482 2016-12-02 15:29:00,581 INFO [RpcServer.listener,port=52482] ipc.RpcServer$Listener(882): RpcServer.listener,port=52482: starting 2016-12-02 15:29:00,582 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52482 2016-12-02 15:29:00,583 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52482 2016-12-02 15:29:00,583 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52482 2016-12-02 15:29:00,583 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52482 2016-12-02 15:29:00,584 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52482 2016-12-02 15:29:00,584 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52482 2016-12-02 15:29:00,584 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52482 2016-12-02 15:29:00,584 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52482 2016-12-02 15:29:00,584 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:00,585 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52482 2016-12-02 15:29:00,585 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_meta: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,585 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52482 2016-12-02 15:29:00,585 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:00,585 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52482 2016-12-02 15:29:00,586 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52482 2016-12-02 15:29:00,587 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52482 2016-12-02 15:29:00,589 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:00,590 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_position: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,590 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:00,593 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:00,594 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,594 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:00,601 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350 2016-12-02 15:29:00,602 INFO [main] 
ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5 2016-12-02 15:29:00,602 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6 2016-12-02 15:29:00,602 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3 2016-12-02 15:29:00,602 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320 2016-12-02 15:29:00,603 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52485 2016-12-02 15:29:00,605 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,605 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:00,605 DEBUG [10.10.9.179:52448.activeMasterManager] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740 2016-12-02 15:29:00,606 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2016-12-02 15:29:00,607 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52485 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,610 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:524850x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,610 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52485-0x158c1de825b000e connected 2016-12-02 15:29:00,610 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/master 2016-12-02 15:29:00,611 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2016-12-02 15:29:00,614 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting 2016-12-02 15:29:00,614 INFO [RpcServer.listener,port=52485] ipc.RpcServer$Listener(882): RpcServer.listener,port=52485: starting 2016-12-02 15:29:00,614 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52485 2016-12-02 15:29:00,615 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52485 2016-12-02 15:29:00,616 DEBUG [main] ipc.RpcExecutor(215): Started 
RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52485 2016-12-02 15:29:00,616 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52485 2016-12-02 15:29:00,616 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52485 2016-12-02 15:29:00,616 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52485 2016-12-02 15:29:00,616 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52485 2016-12-02 15:29:00,617 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52485 2016-12-02 15:29:00,617 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52485 2016-12-02 15:29:00,617 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52485 2016-12-02 15:29:00,617 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52485 2016-12-02 15:29:00,617 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52485 2016-12-02 15:29:00,617 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52485 2016-12-02 15:29:00,617 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52485 2016-12-02 15:29:00,628 INFO [M:0;10.10.9.179:52448] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x54c0d628 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,628 INFO [RS:7;10.10.9.179:52479] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3cf0ded7 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,628 INFO [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4cc89744 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,628 INFO [RS:4;10.10.9.179:52467] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x716880db connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,629 INFO [RS:8;10.10.9.179:52482] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5ffdde77 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,628 INFO [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x55d54b96 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,628 INFO [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x50fed05b connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,629 INFO [RS:0;10.10.9.179:52450] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x26152aea connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,629 INFO [RS:2;10.10.9.179:52460] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7130b450 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,629 INFO [RS:3;10.10.9.179:52464] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x316a16ab connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,629 INFO [RS:5;10.10.9.179:52473] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7821a1f connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:00,632 DEBUG [10.10.9.179:52448.activeMasterManager] regionserver.FlushLargeStoresPolicy(61): No 
hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (26843545) instead 2016-12-02 15:29:00,642 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x55d54b960x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,643 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x55d54b96-0x158c1de825b000f connected 2016-12-02 15:29:00,643 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x3cf0ded70x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,644 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x3cf0ded7-0x158c1de825b0010 connected 2016-12-02 15:29:00,644 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x4cc897440x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,644 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x4cc89744-0x158c1de825b0011 connected 2016-12-02 15:29:00,644 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x716880db0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,645 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x716880db-0x158c1de825b0012 connected 2016-12-02 15:29:00,645 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x26152aea0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,646 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x26152aea-0x158c1de825b0013 connected 2016-12-02 15:29:00,646 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x5ffdde770x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,647 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x5ffdde77-0x158c1de825b0014 connected 2016-12-02 15:29:00,648 DEBUG [M:0;10.10.9.179:52448-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x54c0d6280x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,648 INFO [RS:6;10.10.9.179:52476] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,649 DEBUG [RS:6;10.10.9.179:52476] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,648 DEBUG [M:0;10.10.9.179:52448-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x54c0d628-0x158c1de825b0015 connected 2016-12-02 15:29:00,649 INFO [RS:7;10.10.9.179:52479] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,649 DEBUG [RS:6;10.10.9.179:52476] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@14e841ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,649 DEBUG 
[RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x50fed05b0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,649 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7130b4500x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,650 INFO [RS:1;10.10.9.179:52454] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,649 DEBUG [RS:7;10.10.9.179:52479] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,652 DEBUG [RS:1;10.10.9.179:52454] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,652 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x7130b450-0x158c1de825b0017 connected 2016-12-02 15:29:00,650 DEBUG [RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x50fed05b-0x158c1de825b0016 connected 2016-12-02 15:29:00,652 INFO [RS:4;10.10.9.179:52467] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,652 INFO [RS:8;10.10.9.179:52482] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,652 DEBUG [RS:8;10.10.9.179:52482] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,652 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x316a16ab0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,652 DEBUG [RS:1;10.10.9.179:52454] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4630fc34, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,652 DEBUG [RS:7;10.10.9.179:52479] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3608b5a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,652 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7821a1f0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:00,653 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x7821a1f-0x158c1de825b0018 connected 2016-12-02 15:29:00,653 DEBUG [RS:8;10.10.9.179:52482] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ec0d44c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,653 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x316a16ab-0x158c1de825b0019 connected 2016-12-02 15:29:00,652 INFO [RS:3;10.10.9.179:52464] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,653 DEBUG [RS:3;10.10.9.179:52464] client.ConnectionImplementation(462): clusterid came back null, using default 
default-cluster 2016-12-02 15:29:00,652 INFO [RS:5;10.10.9.179:52473] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,653 DEBUG [RS:5;10.10.9.179:52473] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,652 INFO [RS:2;10.10.9.179:52460] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,654 DEBUG [RS:2;10.10.9.179:52460] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,652 INFO [RS:9;10.10.9.179:52485] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,654 DEBUG [RS:9;10.10.9.179:52485] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,652 INFO [M:0;10.10.9.179:52448] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,654 DEBUG [M:0;10.10.9.179:52448] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,652 DEBUG [RS:4;10.10.9.179:52467] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,654 DEBUG [M:0;10.10.9.179:52448] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b3a38fb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,652 INFO [RS:0;10.10.9.179:52450] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:00,655 DEBUG [RS:0;10.10.9.179:52450] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:00,654 DEBUG [RS:4;10.10.9.179:52467] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30558ae0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,655 DEBUG [RS:0;10.10.9.179:52450] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ced419a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,654 DEBUG [10.10.9.179:52448.activeMasterManager] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-12-02 15:29:00,654 DEBUG [RS:9;10.10.9.179:52485] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a06bb2b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,654 DEBUG [RS:2;10.10.9.179:52460] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bc8df20, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,654 DEBUG [RS:5;10.10.9.179:52473] ipc.AbstractRpcClient(197): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d7a716f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,653 DEBUG [RS:3;10.10.9.179:52464] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b927ddb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:00,655 INFO [10.10.9.179:52448.activeMasterManager] regionserver.HRegion(893): Onlined 1588230740; next sequenceid=2 2016-12-02 15:29:00,656 DEBUG [10.10.9.179:52448.activeMasterManager] regionserver.HRegion(1486): Closing hbase:meta,,1.1588230740: disabling compactions & flushes 2016-12-02 15:29:00,656 DEBUG [10.10.9.179:52448.activeMasterManager] regionserver.HRegion(1525): Updates disabled for region hbase:meta,,1.1588230740 2016-12-02 15:29:00,659 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed info 2016-12-02 15:29:00,660 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_barrier 2016-12-02 15:29:00,660 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_meta 2016-12-02 15:29:00,660 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_position 2016-12-02 15:29:00,660 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed table 2016-12-02 15:29:00,661 INFO [10.10.9.179:52448.activeMasterManager] regionserver.HRegion(1643): Closed hbase:meta,,1.1588230740 2016-12-02 15:29:00,699 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|FINALIZED]]} size 0 2016-12-02 15:29:00,702 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|FINALIZED]]} size 0 2016-12-02 15:29:00,704 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|FINALIZED], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|FINALIZED]]} size 0 2016-12-02 15:29:00,707 DEBUG [10.10.9.179:52448.activeMasterManager] util.FSTableDescriptors(707): Wrote descriptor into: 
hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2016-12-02 15:29:00,731 INFO [10.10.9.179:52448.activeMasterManager] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2016-12-02 15:29:00,753 INFO [10.10.9.179:52448.activeMasterManager] coordination.ZKSplitLogManagerCoordination(586): Found 0 orphan tasks and 0 rescan nodes 2016-12-02 15:29:00,754 DEBUG [10.10.9.179:52448.activeMasterManager] util.FSTableDescriptors(283): Fetching table descriptors from the filesystem. 2016-12-02 15:29:00,795 INFO [10.10.9.179:52448.activeMasterManager] balancer.StochasticLoadBalancer(160): loading config 2016-12-02 15:29:00,812 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/balancer 2016-12-02 15:29:00,813 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/normalizer 2016-12-02 15:29:00,818 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/split 2016-12-02 15:29:00,819 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/switch/merge 2016-12-02 15:29:00,853 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,853 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,854 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,853 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,853 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,854 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,854 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,854 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, 
state=SyncConnected, path=/1/running 2016-12-02 15:29:00,854 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,854 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,854 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2016-12-02 15:29:00,855 INFO [10.10.9.179:52448.activeMasterManager] master.HMaster(646): Server active/primary master=10.10.9.179,52448,1480721340079, sessionid=0x158c1de825b0004, setting cluster-up flag (Was=false) 2016-12-02 15:29:00,858 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,859 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,859 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,860 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer(832): ClusterId : bec31e96-5e53-44a0-979b-2eef7e7b4feb 2016-12-02 15:29:00,867 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,868 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,868 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,868 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,868 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,868 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,868 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,868 DEBUG [RS:1;10.10.9.179:52454] 
procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,867 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,867 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,867 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing 2016-12-02 15:29:00,874 DEBUG [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,874 DEBUG [RS:3;10.10.9.179:52464] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,874 DEBUG [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,874 DEBUG [RS:8;10.10.9.179:52482] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,875 DEBUG [RS:7;10.10.9.179:52479] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,874 DEBUG [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,875 DEBUG [RS:0;10.10.9.179:52450] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,875 DEBUG [RS:4;10.10.9.179:52467] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,875 DEBUG [RS:2;10.10.9.179:52460] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,875 DEBUG [RS:5;10.10.9.179:52473] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc already exists 2016-12-02 15:29:00,876 DEBUG [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,876 DEBUG [RS:3;10.10.9.179:52464] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,876 DEBUG [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,876 DEBUG [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,877 DEBUG [RS:4;10.10.9.179:52467] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,876 DEBUG [RS:7;10.10.9.179:52479] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,876 DEBUG [RS:8;10.10.9.179:52482] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,877 DEBUG [RS:5;10.10.9.179:52473] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,877 DEBUG [RS:2;10.10.9.179:52460] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,876 DEBUG [RS:0;10.10.9.179:52450] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,879 DEBUG [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/abort already exists 2016-12-02 15:29:00,879 DEBUG [RS:3;10.10.9.179:52464] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/abort already exists 2016-12-02 
15:29:00,879 DEBUG [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/abort already exists 2016-12-02 15:29:00,879 DEBUG [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/abort already exists 2016-12-02 15:29:00,879 DEBUG [RS:4;10.10.9.179:52467] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/abort already exists 2016-12-02 15:29:00,879 DEBUG [RS:7;10.10.9.179:52479] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/abort already exists 2016-12-02 15:29:00,879 DEBUG [RS:8;10.10.9.179:52482] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/abort already exists 2016-12-02 15:29:00,882 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,882 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,882 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,882 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,882 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,882 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,882 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,883 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,882 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,883 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,882 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,882 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,883 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,882 DEBUG [RS:1;10.10.9.179:52454] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,883 DEBUG [RS:1;10.10.9.179:52454] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,883 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,883 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,882 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,882 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized 2016-12-02 15:29:00,882 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(44): Procedure 
online-snapshot is initializing 2016-12-02 15:29:00,885 DEBUG [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot already exists 2016-12-02 15:29:00,885 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,884 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing 2016-12-02 15:29:00,885 DEBUG [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,885 DEBUG [RS:2;10.10.9.179:52460] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,885 DEBUG [RS:5;10.10.9.179:52473] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,885 DEBUG [M:0;10.10.9.179:52448] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot already exists 2016-12-02 15:29:00,885 DEBUG [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,885 DEBUG [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,886 DEBUG [RS:8;10.10.9.179:52482] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,885 DEBUG [RS:0;10.10.9.179:52450] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,885 DEBUG [RS:3;10.10.9.179:52464] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,885 DEBUG [RS:7;10.10.9.179:52479] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,887 DEBUG [M:0;10.10.9.179:52448] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,887 DEBUG [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/reached already exists 2016-12-02 15:29:00,889 DEBUG [RS:5;10.10.9.179:52473] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/abort already exists 2016-12-02 15:29:00,889 DEBUG [RS:2;10.10.9.179:52460] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/abort already exists 2016-12-02 15:29:00,889 DEBUG [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/abort already exists 2016-12-02 15:29:00,889 DEBUG [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/abort already exists 2016-12-02 15:29:00,889 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,890 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,889 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,889 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,889 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,889 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 
15:29:00,889 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,889 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,890 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,890 DEBUG [RS:1;10.10.9.179:52454] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,890 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized 2016-12-02 15:29:00,891 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /1/flush-table-proc/acquired already exists 2016-12-02 15:29:00,891 INFO [10.10.9.179:52448.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/flush-table-proc/acquired /1/flush-table-proc/reached /1/flush-table-proc/abort 2016-12-02 15:29:00,893 DEBUG [10.10.9.179:52448.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(246): Starting the controller for procedure member:10.10.9.179,52448,1480721340079 2016-12-02 15:29:00,895 INFO [RS:1;10.10.9.179:52454] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:6;10.10.9.179:52476] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:8;10.10.9.179:52482] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [M:0;10.10.9.179:52448] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:0;10.10.9.179:52450] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:3;10.10.9.179:52464] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:9;10.10.9.179:52485] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:7;10.10.9.179:52479] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:5;10.10.9.179:52473] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,895 INFO [RS:4;10.10.9.179:52467] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,898 INFO [RS:2;10.10.9.179:52460] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false 2016-12-02 15:29:00,899 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /1/online-snapshot/acquired already exists 2016-12-02 15:29:00,900 INFO [10.10.9.179:52448.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort 2016-12-02 15:29:00,901 DEBUG [10.10.9.179:52448.activeMasterManager] 
procedure.ZKProcedureCoordinatorRpcs(246): Starting the controller for procedure member:10.10.9.179,52448,1480721340079 2016-12-02 15:29:00,911 INFO [RS:9;10.10.9.179:52485] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:6;10.10.9.179:52476] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [M:0;10.10.9.179:52448] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:7;10.10.9.179:52479] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:3;10.10.9.179:52464] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,912 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,911 INFO [RS:0;10.10.9.179:52450] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:2;10.10.9.179:52460] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:8;10.10.9.179:52482] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:4;10.10.9.179:52467] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:1;10.10.9.179:52454] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,911 INFO [RS:5;10.10.9.179:52473] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms 2016-12-02 15:29:00,913 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,913 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,913 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,913 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker 
runs every 0sec 2016-12-02 15:29:00,912 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,912 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,912 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,912 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,912 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,913 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec 2016-12-02 15:29:00,918 DEBUG [RS:9;10.10.9.179:52485] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ed1d237, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,918 DEBUG [RS:3;10.10.9.179:52464] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f24fdb7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [RS:5;10.10.9.179:52473] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@285711eb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [RS:8;10.10.9.179:52482] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38b65d26, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [RS:7;10.10.9.179:52479] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31e59b4e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,918 DEBUG [RS:4;10.10.9.179:52467] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@507a75af, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [RS:1;10.10.9.179:52454] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@631d95d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [RS:2;10.10.9.179:52460] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ee5a172, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [M:0;10.10.9.179:52448] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c6a6c38, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [RS:6;10.10.9.179:52476] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@74c7bee2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,919 DEBUG [RS:0;10.10.9.179:52450] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e6bc840, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0 2016-12-02 15:29:00,929 DEBUG [RS:4;10.10.9.179:52467] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:4;10.10.9.179:52467 2016-12-02 15:29:00,930 DEBUG [RS:6;10.10.9.179:52476] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:6;10.10.9.179:52476 2016-12-02 15:29:00,929 DEBUG [RS:1;10.10.9.179:52454] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:1;10.10.9.179:52454 2016-12-02 15:29:00,929 DEBUG [RS:8;10.10.9.179:52482] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:8;10.10.9.179:52482 2016-12-02 15:29:00,929 DEBUG [RS:0;10.10.9.179:52450] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;10.10.9.179:52450 2016-12-02 15:29:00,930 DEBUG [RS:7;10.10.9.179:52479] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:7;10.10.9.179:52479 2016-12-02 15:29:00,929 DEBUG [RS:3;10.10.9.179:52464] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:3;10.10.9.179:52464 2016-12-02 15:29:00,929 DEBUG [RS:5;10.10.9.179:52473] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:5;10.10.9.179:52473 2016-12-02 15:29:00,929 DEBUG [M:0;10.10.9.179:52448] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:M:0;10.10.9.179:52448 2016-12-02 15:29:00,929 DEBUG [RS:2;10.10.9.179:52460] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:2;10.10.9.179:52460 2016-12-02 15:29:00,929 DEBUG [RS:9;10.10.9.179:52485] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:9;10.10.9.179:52485 2016-12-02 15:29:00,943 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2016-12-02 15:29:00,944 DEBUG [RS:1;10.10.9.179:52454] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310 2016-12-02 15:29:00,944 DEBUG [RS:5;10.10.9.179:52473] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476 2016-12-02 15:29:00,944 DEBUG 
[RS:6;10.10.9.179:52476] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506 2016-12-02 15:29:00,944 DEBUG [RS:2;10.10.9.179:52460] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350 2016-12-02 15:29:00,944 DEBUG [RS:0;10.10.9.179:52450] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274 2016-12-02 15:29:00,944 DEBUG [RS:9;10.10.9.179:52485] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604 2016-12-02 15:29:00,944 DEBUG [RS:7;10.10.9.179:52479] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539 2016-12-02 15:29:00,944 DEBUG [RS:4;10.10.9.179:52467] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421 2016-12-02 15:29:00,944 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506 2016-12-02 15:29:00,944 DEBUG [M:0;10.10.9.179:52448] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079 2016-12-02 15:29:00,944 DEBUG [RS:3;10.10.9.179:52464] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388 2016-12-02 15:29:00,944 DEBUG [RS:8;10.10.9.179:52482] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:00,945 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388 2016-12-02 15:29:00,945 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539 2016-12-02 15:29:00,946 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421 2016-12-02 15:29:00,946 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476 2016-12-02 15:29:00,946 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604 2016-12-02 15:29:00,946 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274 2016-12-02 15:29:00,947 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, 
baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350 2016-12-02 15:29:00,947 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310 2016-12-02 15:29:00,947 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079 2016-12-02 15:29:00,948 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:00,949 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52476,1480721340506 2016-12-02 15:29:00,949 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52464,1480721340388 2016-12-02 15:29:00,950 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52479,1480721340539 2016-12-02 15:29:00,950 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52467,1480721340421 2016-12-02 15:29:00,950 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52473,1480721340476 2016-12-02 15:29:00,951 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52485,1480721340604 2016-12-02 15:29:00,951 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52450,1480721340274 2016-12-02 15:29:00,952 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52460,1480721340350 2016-12-02 15:29:00,952 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52454,1480721340310 2016-12-02 15:29:00,952 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52448,1480721340079 2016-12-02 15:29:00,953 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /1/rs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:00,958 INFO [RS:0;10.10.9.179:52450] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,958 INFO [RS:6;10.10.9.179:52476] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,960 INFO [RS:6;10.10.9.179:52476] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,960 INFO [RS:3;10.10.9.179:52464] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,961 INFO [RS:3;10.10.9.179:52464] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,960 INFO [RS:2;10.10.9.179:52460] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,961 INFO [RS:2;10.10.9.179:52460] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,958 INFO [RS:5;10.10.9.179:52473] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,961 INFO [RS:5;10.10.9.179:52473] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,958 INFO 
[M:0;10.10.9.179:52448] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,961 INFO [M:0;10.10.9.179:52448] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,958 INFO [RS:1;10.10.9.179:52454] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,961 INFO [RS:1;10.10.9.179:52454] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,960 INFO [RS:8;10.10.9.179:52482] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,961 INFO [RS:8;10.10.9.179:52482] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,958 INFO [RS:7;10.10.9.179:52479] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,961 INFO [RS:7;10.10.9.179:52479] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,960 INFO [RS:0;10.10.9.179:52450] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,960 INFO [RS:9;10.10.9.179:52485] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,962 INFO [RS:9;10.10.9.179:52485] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,958 INFO [RS:4;10.10.9.179:52467] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled 2016-12-02 15:29:00,962 INFO [RS:4;10.10.9.179:52467] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled 2016-12-02 15:29:00,962 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52448, startcode=1480721340079 2016-12-02 15:29:00,964 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52485, startcode=1480721340604 2016-12-02 15:29:00,964 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52450, startcode=1480721340274 2016-12-02 15:29:00,964 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52464, startcode=1480721340388 2016-12-02 15:29:00,964 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52460, startcode=1480721340350 2016-12-02 15:29:00,964 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52473, startcode=1480721340476 2016-12-02 15:29:00,964 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52467, startcode=1480721340421 2016-12-02 15:29:00,964 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52476, startcode=1480721340506 2016-12-02 15:29:00,964 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52482, startcode=1480721340569 2016-12-02 15:29:00,964 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer(2465): reportForDuty to 
master=10.10.9.179,52448,1480721340079 with port=52479, startcode=1480721340539 2016-12-02 15:29:00,964 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52454, startcode=1480721340310 2016-12-02 15:29:00,968 DEBUG [M:0;10.10.9.179:52448] regionserver.HRegionServer(2484): Master is not running yet 2016-12-02 15:29:00,968 WARN [M:0;10.10.9.179:52448] regionserver.HRegionServer(970): reportForDuty failed; sleeping and then retrying. 2016-12-02 15:29:00,999 INFO [10.10.9.179:52448.activeMasterManager] master.MasterCoprocessorHost(101): System coprocessor loading is enabled 2016-12-02 15:29:01,001 DEBUG [10.10.9.179:52448.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-10.10.9.179:52448, corePoolSize=5, maxPoolSize=5 2016-12-02 15:29:01,001 DEBUG [10.10.9.179:52448.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-10.10.9.179:52448, corePoolSize=5, maxPoolSize=5 2016-12-02 15:29:01,001 DEBUG [10.10.9.179:52448.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-10.10.9.179:52448, corePoolSize=5, maxPoolSize=5 2016-12-02 15:29:01,001 DEBUG [10.10.9.179:52448.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-10.10.9.179:52448, corePoolSize=5, maxPoolSize=5 2016-12-02 15:29:01,002 DEBUG [10.10.9.179:52448.activeMasterManager] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-10.10.9.179:52448, corePoolSize=10, maxPoolSize=10 2016-12-02 15:29:01,002 DEBUG [10.10.9.179:52448.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-10.10.9.179:52448, corePoolSize=1, maxPoolSize=1 2016-12-02 15:29:01,036 INFO [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor(487): Starting procedure executor threads=8 2016-12-02 15:29:01,038 INFO [10.10.9.179:52448.activeMasterManager] wal.WALProcedureStore(299): Starting WAL Procedure Store lease recovery 2016-12-02 15:29:01,047 DEBUG [10.10.9.179:52448.activeMasterManager] wal.WALProcedureStore(922): Roll new state log: 1 2016-12-02 15:29:01,048 INFO [10.10.9.179:52448.activeMasterManager] wal.WALProcedureStore(328): Lease acquired for flushLogId: 1 2016-12-02 15:29:01,049 INFO [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor(508): recover procedure store (WALProcedureStore) lease: 10msec 2016-12-02 15:29:01,050 DEBUG [10.10.9.179:52448.activeMasterManager] wal.WALProcedureStore(345): No state logs to replay. 
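
The WARN/DEBUG pair above ("Master is not running yet" followed by "reportForDuty failed; sleeping and then retrying.") is the region-server registration retry: M:0 reports for duty at 15:29:00,964, is rejected because the master RPC endpoint is not serving yet, and succeeds on a later attempt at 15:29:01,103. A minimal sketch of that loop, assuming a boolean-returning report call and a fixed back-off; the names and the 100 ms pause are illustrative, not the HRegionServer source:

    import java.util.function.BooleanSupplier;

    public final class ReportForDutyRetry {
        // Keep reporting until the master accepts the registration.
        static void reportUntilAccepted(BooleanSupplier reportForDuty, long sleepMs)
                throws InterruptedException {
            while (!reportForDuty.getAsBoolean()) {
                System.err.println("reportForDuty failed; sleeping and then retrying.");
                Thread.sleep(sleepMs); // back off before contacting the master again
            }
        }

        public static void main(String[] args) throws InterruptedException {
            // Toy stand-in for the master: reject the first attempt, accept the second.
            int[] attempts = {0};
            reportUntilAccepted(() -> ++attempts[0] > 1, 100L);
        }
    }
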
2016-12-02 15:29:01,051 DEBUG [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor$1(283): load procedures maxProcId=0 2016-12-02 15:29:01,051 INFO [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor(522): load procedure store (WALProcedureStore): 2msec 2016-12-02 15:29:01,051 DEBUG [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor(526): start workers 8 2016-12-02 15:29:01,057 DEBUG [10.10.9.179:52448.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2016-12-02 15:29:01,058 INFO [10.10.9.179:52448.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:01,063 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): replicationLogCleaner0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,064 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(529): replicationLogCleaner-0x158c1de825b001a connected 2016-12-02 15:29:01,065 DEBUG [10.10.9.179:52448.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2016-12-02 15:29:01,066 DEBUG [10.10.9.179:52448.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2016-12-02 15:29:01,071 DEBUG [10.10.9.179:52448.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2016-12-02 15:29:01,072 DEBUG [10.10.9.179:52448.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2016-12-02 15:29:01,073 INFO [10.10.9.179:52448.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x625d36f3 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:01,077 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x625d36f30x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,077 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x625d36f3-0x158c1de825b001b connected 2016-12-02 15:29:01,078 DEBUG [10.10.9.179:52448.activeMasterManager] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@206781e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:01,078 INFO [10.10.9.179:52448.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=ReplicationAdmin connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:01,081 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): ReplicationAdmin0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,081 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(529): ReplicationAdmin-0x158c1de825b001c connected 2016-12-02 15:29:01,099 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(363): 
ReplicationAdmin-0x158c1de825b001c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state 2016-12-02 15:29:01,102 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(363): ReplicationAdmin-0x158c1de825b001c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1 2016-12-02 15:29:01,102 INFO [10.10.9.179:52448.activeMasterManager] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2 2016-12-02 15:29:01,102 INFO [10.10.9.179:52448.activeMasterManager] master.ServerManager(1042): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 10, maximum of 10, timeout of 4500 ms, interval of 1500 ms. 2016-12-02 15:29:01,103 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52448,1480721340079 with port=52448, startcode=1480721340079 2016-12-02 15:29:01,119 DEBUG [RS:7;10.10.9.179:52479] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:3;10.10.9.179:52464] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 INFO [M:0;10.10.9.179:52448] master.ServerManager(453): Registering server=10.10.9.179,52448,1480721340079 2016-12-02 15:29:01,119 DEBUG [RS:4;10.10.9.179:52467] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:9;10.10.9.179:52485] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:6;10.10.9.179:52476] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:0;10.10.9.179:52450] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:1;10.10.9.179:52454] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:2;10.10.9.179:52460] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:5;10.10.9.179:52473] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,119 DEBUG [RS:8;10.10.9.179:52482] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false 2016-12-02 15:29:01,125 DEBUG [M:0;10.10.9.179:52448] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,125 DEBUG [M:0;10.10.9.179:52448] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,125 DEBUG [M:0;10.10.9.179:52448] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1 2016-12-02 15:29:01,125 WARN [M:0;10.10.9.179:52448] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
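
The "Added new peer cluster=localhost:60648:/2" line is cluster1 picking up the replication peer that the test registered for cluster2 (same ZooKeeper ensemble, base znode /2). A hedged client-side sketch of that registration using the ReplicationAdmin class visible in this log; the exact addPeer signature varies across HBase versions, so treat this as illustrative rather than the test's code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
    import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

    public final class AddPeerSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // cluster1's client conf
            try (ReplicationAdmin admin = new ReplicationAdmin(conf)) {
                ReplicationPeerConfig peer = new ReplicationPeerConfig();
                // Cluster key = <zk quorum>:<client port>:<znode parent> of the sink cluster.
                peer.setClusterKey("localhost:60648:/2");
                admin.addPeer("1", peer); // peer id "1", matching /1/replication/peers/1 above
            }
        }
    }

The two "Set watcher on existing znode=/1/replication/peers/1..." lines just above are the server side noticing exactly this registration in ZooKeeper.
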
2016-12-02 15:29:01,125 INFO [M:0;10.10.9.179:52448] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:01,125 DEBUG [M:0;10.10.9.179:52448] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52448,1480721340079 2016-12-02 15:29:01,145 DEBUG [RS:3;10.10.9.179:52464] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,146 DEBUG [RS:2;10.10.9.179:52460] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,146 DEBUG [RS:7;10.10.9.179:52479] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,146 DEBUG [RS:1;10.10.9.179:52454] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,146 DEBUG [RS:9;10.10.9.179:52485] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,146 DEBUG [RS:0;10.10.9.179:52450] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,145 DEBUG [RS:6;10.10.9.179:52476] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,145 DEBUG [RS:5;10.10.9.179:52473] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,145 DEBUG [RS:4;10.10.9.179:52467] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,145 DEBUG [RS:8;10.10.9.179:52482] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:01,149 INFO [M:0;10.10.9.179:52448] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2016-12-02 15:29:01,153 INFO [10.10.9.179:52448.activeMasterManager] master.ServerManager(1042): Waiting for region servers count to settle; currently checked in 1, slept for 51 ms, expecting minimum of 10, maximum of 10, timeout of 4500 ms, interval of 1500 ms. 
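
The "Waiting for region servers count to settle" lines come from the master blocking until enough region servers have checked in (here: minimum 10, timeout 4500 ms, polling every 1500 ms). A sketch of that settle-wait under the same parameters; the helper is illustrative, not the ServerManager source:

    import java.util.function.IntSupplier;

    public final class SettleWait {
        // Block until at least `min` servers are checked in and the count has been
        // stable across one polling interval, or until `timeoutMs` elapses.
        static void waitForRegionServers(IntSupplier checkedIn, int min,
                                         long timeoutMs, long intervalMs)
                throws InterruptedException {
            long start = System.currentTimeMillis();
            int last = -1;
            while (true) {
                int now = checkedIn.getAsInt();
                long slept = System.currentTimeMillis() - start;
                if ((now >= min && now == last) || slept >= timeoutMs) {
                    return; // settled, or out of time
                }
                last = now;
                Thread.sleep(Math.min(intervalMs, timeoutMs - slept));
            }
        }
    }

The "checked in 11" in the later "Finished waiting" line is consistent with the 10 region servers plus M:0, which also reports for duty as a region server in this log.
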
2016-12-02 15:29:01,177 INFO [M:0;10.10.9.179:52448] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds 2016-12-02 15:29:01,183 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52507; connections=1, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,183 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52508; connections=2, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,183 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52509; connections=3, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,183 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52510; connections=4, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,184 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52511; connections=5, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,184 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52512; connections=6, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,184 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52513; connections=7, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,184 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52505; connections=8, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,184 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52514; connections=9, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,184 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52506; connections=10, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:01,191 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.3 (auth:SIMPLE) 2016-12-02 15:29:01,191 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.4 (auth:SIMPLE) 2016-12-02 15:29:01,191 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.8 (auth:SIMPLE) 2016-12-02 15:29:01,382 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52448, corePoolSize=3, maxPoolSize=3 2016-12-02 15:29:01,382 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52448, corePoolSize=1, maxPoolSize=1 2016-12-02 15:29:01,382 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52448, corePoolSize=3, maxPoolSize=3 2016-12-02 
15:29:01,383 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52448, corePoolSize=3, maxPoolSize=3 2016-12-02 15:29:01,383 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52448, corePoolSize=1, maxPoolSize=1 2016-12-02 15:29:01,383 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52448, corePoolSize=2, maxPoolSize=2 2016-12-02 15:29:01,384 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52448, corePoolSize=10, maxPoolSize=10 2016-12-02 15:29:01,384 DEBUG [M:0;10.10.9.179:52448] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52448, corePoolSize=3, maxPoolSize=3 2016-12-02 15:29:01,384 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52509 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,384 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52507 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,384 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52508 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,385 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.1 (auth:SIMPLE) 2016-12-02 15:29:01,385 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.7 (auth:SIMPLE) 2016-12-02 15:29:01,385 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.2 (auth:SIMPLE) 2016-12-02 15:29:01,386 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52511 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,386 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52506 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: 
"659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,386 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52514 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,386 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.6 (auth:SIMPLE) 2016-12-02 15:29:01,386 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.5 (auth:SIMPLE) 2016-12-02 15:29:01,386 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.9 (auth:SIMPLE) 2016-12-02 15:29:01,386 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52512 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,386 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52505 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,386 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52513 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,386 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.0 (auth:SIMPLE) 2016-12-02 15:29:01,387 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 52510 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:01,403 INFO [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52454,1480721340310 2016-12-02 15:29:01,403 INFO [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52482,1480721340569 2016-12-02 15:29:01,403 INFO [RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52485,1480721340604 2016-12-02 15:29:01,403 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.ServerManager(453): Registering 
server=10.10.9.179,52473,1480721340476 2016-12-02 15:29:01,403 INFO [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52476,1480721340506 2016-12-02 15:29:01,407 INFO [SplitLogWorker-10.10.9.179:52448] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52448,1480721340079 starting 2016-12-02 15:29:01,412 INFO [M:0;10.10.9.179:52448] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 2016-12-02 15:29:01,416 INFO [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52479,1480721340539 2016-12-02 15:29:01,416 INFO [RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52450,1480721340274 2016-12-02 15:29:01,416 INFO [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52464,1480721340388 2016-12-02 15:29:01,417 INFO [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52460,1480721340350 2016-12-02 15:29:01,417 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.ServerManager(453): Registering server=10.10.9.179,52467,1480721340421 2016-12-02 15:29:01,417 INFO [M:0;10.10.9.179:52448] regionserver.MemStoreChunkPool(212): Allocating MemStoreChunkPool with chunk size 2 MB, max count 497, initial count 0 2016-12-02 15:29:01,419 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52448,1480721340079, RpcServer on 10.10.9.179/10.10.9.179:52448, sessionid=0x158c1de825b0004 2016-12-02 15:29:01,419 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,419 DEBUG [M:0;10.10.9.179:52448] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52448,1480721340079 2016-12-02 15:29:01,419 DEBUG [M:0;10.10.9.179:52448] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52448,1480721340079' 2016-12-02 15:29:01,419 DEBUG [M:0;10.10.9.179:52448] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,420 DEBUG [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,420 DEBUG [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,421 DEBUG [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,421 DEBUG [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,421 DEBUG [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,421 DEBUG [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,421 DEBUG [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1426): Config from master: 
hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,421 DEBUG [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,421 DEBUG [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,421 DEBUG [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,420 DEBUG [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,422 DEBUG [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,422 DEBUG [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1 2016-12-02 15:29:01,420 DEBUG [M:0;10.10.9.179:52448] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,420 DEBUG [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,420 DEBUG [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,420 DEBUG [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd 2016-12-02 15:29:01,422 DEBUG [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,422 INFO [10.10.9.179:52448.activeMasterManager] master.ServerManager(1059): Finished waiting for region servers count to settle; checked in 11, slept for 320 ms, expecting minimum of 10, maximum of 10, master is running 2016-12-02 15:29:01,422 DEBUG [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1 2016-12-02 15:29:01,422 DEBUG [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402 2016-12-02 15:29:01,422 WARN [RS:3;10.10.9.179:52464] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
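
The ZNodeClearer WARN, repeated by each server as it starts, fires because HBASE_ZNODE_FILE is unset in the test environment: without it the server cannot record its ephemeral znode for the start scripts to delete after a crash, so recovery must wait out the ZooKeeper session timeout (the "Longer MTTR!" in the message). A minimal sketch of the env check; the file contents and helper name are assumptions:

    import java.io.FileWriter;
    import java.io.IOException;

    public final class ZNodeFileSketch {
        // Record this server's znode so an external script can clear it on crash.
        static void recordZNode(String znodePath) throws IOException {
            String file = System.getenv("HBASE_ZNODE_FILE");
            if (file == null) {
                System.err.println("Environment variable HBASE_ZNODE_FILE not set; "
                        + "znodes will not be cleared on crash by start scripts");
                return;
            }
            try (FileWriter w = new FileWriter(file)) {
                w.write(znodePath); // e.g. /1/rs/10.10.9.179,52450,1480721340274
            }
        }
    }
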
2016-12-02 15:29:01,421 DEBUG [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:01,421 DEBUG [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:01,421 DEBUG [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:01,421 DEBUG [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:01,421 DEBUG [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd
2016-12-02 15:29:01,421 DEBUG [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402
2016-12-02 15:29:01,423 DEBUG [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52402
2016-12-02 15:29:01,423 WARN [RS:2;10.10.9.179:52460] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,423 WARN [RS:1;10.10.9.179:52454] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,422 WARN [RS:6;10.10.9.179:52476] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,422 INFO [RS:3;10.10.9.179:52464] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,422 WARN [RS:4;10.10.9.179:52467] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,422 DEBUG [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:01,422 WARN [RS:9;10.10.9.179:52485] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,422 DEBUG [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:01,422 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started
2016-12-02 15:29:01,423 WARN [RS:0;10.10.9.179:52450] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,423 INFO [RS:9;10.10.9.179:52485] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,423 WARN [RS:8;10.10.9.179:52482] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,423 INFO [RS:4;10.10.9.179:52467] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,424 INFO [RS:8;10.10.9.179:52482] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,423 DEBUG [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,423 INFO [RS:6;10.10.9.179:52476] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,423 INFO [RS:1;10.10.9.179:52454] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,423 INFO [RS:2;10.10.9.179:52460] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,423 DEBUG [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
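
The LruBlockCache numbers in the cacheConfig lines above are internally consistent. A quick check (a sketch; the factor arithmetic inside LruBlockCache uses floats, so the minSize watermark only matches up to rounding):

    public class LruCacheMath {
      public static void main(String[] args) {
        long maxSize = 1043962304L, currentSize = 765632L;
        long minSize = 991764160L, multiSize = 495882080L, singleSize = 247941040L;

        // freeSize is exactly maxSize - currentSize: 1043962304 - 765632 = 1043196672.
        System.out.println(maxSize - currentSize);     // 1043196672, as logged

        // The watermarks scale with minFactor=0.95, multiFactor=0.5, singleFactor=0.25:
        // minSize is roughly 0.95 * maxSize, and multiSize and singleSize are exactly
        // the 0.5 and 0.25 shares of minSize.
        System.out.println(multiSize * 2 == minSize);  // true
        System.out.println(singleSize * 4 == minSize); // true
      }
    }
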
2016-12-02 15:29:01,423 DEBUG [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:01,425 WARN [RS:5;10.10.9.179:52473] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,425 DEBUG [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,426 INFO [RS:5;10.10.9.179:52473] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,425 DEBUG [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,425 DEBUG [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,424 DEBUG [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,424 DEBUG [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,424 DEBUG [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,424 INFO [RS:0;10.10.9.179:52450] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,424 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting
2016-12-02 15:29:01,430 DEBUG [M:0;10.10.9.179:52448] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,430 DEBUG [M:0;10.10.9.179:52448] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52448,1480721340079'
2016-12-02 15:29:01,430 DEBUG [M:0;10.10.9.179:52448] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort'
2016-12-02 15:29:01,430 DEBUG [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,431 DEBUG [10.10.9.179:52448.activeMasterManager] master.MasterWalManager(174): No log files to split, proceeding...
2016-12-02 15:29:01,427 DEBUG [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,426 WARN [RS:7;10.10.9.179:52479] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:01,434 DEBUG [M:0;10.10.9.179:52448] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired'
2016-12-02 15:29:01,434 INFO [RS:7;10.10.9.179:52479] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:01,434 DEBUG [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,437 DEBUG [M:0;10.10.9.179:52448] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started
2016-12-02 15:29:01,439 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(622): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Unable to get data of znode /1/meta-region-server because node does not exist (not an error)
2016-12-02 15:29:01,466 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(622): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Unable to get data of znode /1/meta-region-server because node does not exist (not an error)
2016-12-02 15:29:01,466 INFO [10.10.9.179:52448.activeMasterManager] master.MasterMetaBootstrap(188): Re-assigning hbase:meta with replicaId 0, it was on null
2016-12-02 15:29:01,467 DEBUG [RS:2;10.10.9.179:52460] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,467 DEBUG [RS:3;10.10.9.179:52464] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,467 DEBUG [RS:8;10.10.9.179:52482] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,469 DEBUG [RS:4;10.10.9.179:52467] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,470 DEBUG [RS:6;10.10.9.179:52476] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,471 DEBUG [RS:0;10.10.9.179:52450] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,471 INFO [M:0;10.10.9.179:52448] quotas.RegionServerQuotaManager(62): Quota support disabled
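
The baseZNode=/1 and baseZNode=/2 values throughout this log show two logical HBase clusters sharing the single ZooKeeper ensemble on localhost:60648, kept apart by different base znodes. A sketch of the configuration that produces this layout (assumed setup, not the literal test code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class TwoClusterZkConfig {
      public static void main(String[] args) {
        Configuration conf1 = HBaseConfiguration.create();
        conf1.set("hbase.zookeeper.quorum", "localhost");
        conf1.setInt("hbase.zookeeper.property.clientPort", 60648);
        conf1.set("zookeeper.znode.parent", "/1"); // source cluster, baseZNode=/1

        Configuration conf2 = new Configuration(conf1);
        conf2.set("zookeeper.znode.parent", "/2"); // peer (sink) cluster, baseZNode=/2
        System.out.println(conf1.get("zookeeper.znode.parent") + " vs "
            + conf2.get("zookeeper.znode.parent"));
      }
    }
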
2016-12-02 15:29:01,471 DEBUG [RS:3;10.10.9.179:52464] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,471 DEBUG [RS:7;10.10.9.179:52479] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,471 DEBUG [RS:8;10.10.9.179:52482] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,471 DEBUG [RS:6;10.10.9.179:52476] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,471 DEBUG [RS:2;10.10.9.179:52460] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,472 INFO [RS:3;10.10.9.179:52464] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,471 DEBUG [RS:4;10.10.9.179:52467] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,472 INFO [RS:6;10.10.9.179:52476] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,472 INFO [RS:8;10.10.9.179:52482] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,472 DEBUG [RS:0;10.10.9.179:52450] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,472 INFO [RS:2;10.10.9.179:52460] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,472 DEBUG [RS:7;10.10.9.179:52479] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,472 INFO [RS:4;10.10.9.179:52467] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,473 INFO [RS:0;10.10.9.179:52450] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,473 INFO [RS:7;10.10.9.179:52479] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,473 DEBUG [RS:5;10.10.9.179:52473] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,473 DEBUG [RS:1;10.10.9.179:52454] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,473 DEBUG [RS:9;10.10.9.179:52485] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1/peer-state
2016-12-02 15:29:01,474 DEBUG [RS:5;10.10.9.179:52473] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,474 DEBUG [RS:1;10.10.9.179:52454] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,474 DEBUG [RS:9;10.10.9.179:52485] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,474 INFO [RS:5;10.10.9.179:52473] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,474 INFO [RS:1;10.10.9.179:52454] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,474 INFO [RS:9;10.10.9.179:52485] replication.ReplicationPeersZKImpl(451): Added new peer cluster=localhost:60648:/2
2016-12-02 15:29:01,479 DEBUG [10.10.9.179:52448.activeMasterManager] master.AssignmentManager(1321): No previous transition plan found (or ignoring an existing plan) for hbase:meta,,1.1588230740; generated random plan=hri=hbase:meta,,1.1588230740, src=, dest=10.10.9.179,52448,1480721340079; 11 (online=11) available servers, forceNewPlan=false
2016-12-02 15:29:01,479 INFO [10.10.9.179:52448.activeMasterManager] master.AssignmentManager(1105): Assigning hbase:meta,,1.1588230740 to 10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,479 INFO [10.10.9.179:52448.activeMasterManager] master.RegionStates(1139): Transition {1588230740 state=OFFLINE, ts=1480721341466, server=null} to {1588230740 state=PENDING_OPEN, ts=1480721341479, server=10.10.9.179,52448,1480721340079}
2016-12-02 15:29:01,479 INFO [10.10.9.179:52448.activeMasterManager] zookeeper.MetaTableLocator(442): Setting hbase:meta region location in ZooKeeper as 10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,483 DEBUG [RS:4;10.10.9.179:52467] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 DEBUG [RS:3;10.10.9.179:52464] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 DEBUG [RS:0;10.10.9.179:52450] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 DEBUG [RS:2;10.10.9.179:52460] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 DEBUG [RS:9;10.10.9.179:52485] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 DEBUG [RS:5;10.10.9.179:52473] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 DEBUG [RS:1;10.10.9.179:52454] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 INFO [RS:3;10.10.9.179:52464] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x57479bf4 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,483 DEBUG [RS:8;10.10.9.179:52482] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
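
Each region server above picks up replication peer "1" with cluster key localhost:60648:/2 (quorum:clientPort:znodeParent of the sink cluster). A peer like this is typically registered once by the test, for example via the ReplicationAdmin client API. The exact API shape varies across HBase versions; treat the following as an assumed, illustrative setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
    import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

    public class AddPeerSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // points at cluster 1 (baseZNode /1)
        ReplicationAdmin admin = new ReplicationAdmin(conf);
        try {
          ReplicationPeerConfig peer = new ReplicationPeerConfig();
          // Cluster key of the sink cluster, as seen in the log.
          peer.setClusterKey("localhost:60648:/2");
          admin.addPeer("1", peer); // peer id "1" -> znode /1/replication/peers/1
        } finally {
          admin.close();
        }
      }
    }
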
2016-12-02 15:29:01,484 INFO [RS:0;10.10.9.179:52450] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x147377a1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,483 DEBUG [RS:6;10.10.9.179:52476] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,483 DEBUG [RS:7;10.10.9.179:52479] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/replication/peers/1
2016-12-02 15:29:01,485 INFO [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7e936139 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,487 INFO [RS:2;10.10.9.179:52460] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x11d3cbcb connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,487 INFO [RS:5;10.10.9.179:52473] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2fdb87ce connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,489 INFO [RS:4;10.10.9.179:52467] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x48767423 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,488 INFO [RS:7;10.10.9.179:52479] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x552134b6 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,488 INFO [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3cd0811 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,488 INFO [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x305c2874 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,487 INFO [RS:8;10.10.9.179:52482] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x65776f35 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,492 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x57479bf40x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,496 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x57479bf4-0x158c1de825b001d connected
2016-12-02 15:29:01,496 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x147377a10x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,497 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x147377a1-0x158c1de825b001e connected
2016-12-02 15:29:01,497 DEBUG [RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7e9361390x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,497 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x11d3cbcb0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,498 DEBUG [RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x7e936139-0x158c1de825b001f connected
2016-12-02 15:29:01,498 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x11d3cbcb-0x158c1de825b0020 connected
2016-12-02 15:29:01,499 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x487674230x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,499 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x48767423-0x158c1de825b0021 connected
2016-12-02 15:29:01,499 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x65776f350x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,500 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x65776f35-0x158c1de825b0022 connected
2016-12-02 15:29:01,500 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x552134b60x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,500 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x305c28740x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,501 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x552134b6-0x158c1de825b0023 connected
2016-12-02 15:29:01,502 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x305c2874-0x158c1de825b0024 connected
2016-12-02 15:29:01,502 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x3cd08110x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,503 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x3cd0811-0x158c1de825b0025 connected
2016-12-02 15:29:01,503 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x2fdb87ce0x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,504 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x2fdb87ce-0x158c1de825b0026 connected
2016-12-02 15:29:01,504 DEBUG [RS:3;10.10.9.179:52464] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b201318, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,504 DEBUG [RS:3;10.10.9.179:52464] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,504 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.MetaTableLocator(454): META region location doesn't exist, create it
2016-12-02 15:29:01,504 DEBUG [RS:0;10.10.9.179:52450] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37846e6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [RS:8;10.10.9.179:52482] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1227d07b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [RS:8;10.10.9.179:52482] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,505 DEBUG [RS:5;10.10.9.179:52473] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@732298b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [RS:5;10.10.9.179:52473] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,505 DEBUG [RS:9;10.10.9.179:52485] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c273e9f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [10.10.9.179:52448.activeMasterManager] master.ServerManager(968): New admin connection to 10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,505 DEBUG [RS:7;10.10.9.179:52479] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e074a88, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [RS:7;10.10.9.179:52479] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,505 DEBUG [RS:6;10.10.9.179:52476] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6343b639, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,506 DEBUG [RS:6;10.10.9.179:52476] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,505 DEBUG [RS:4;10.10.9.179:52467] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72a60767, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [RS:2;10.10.9.179:52460] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54ab0360, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [RS:1;10.10.9.179:52454] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62aaf2c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,505 DEBUG [RS:0;10.10.9.179:52450] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,506 INFO [RS:0;10.10.9.179:52450] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,506 INFO [RS:5;10.10.9.179:52473] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,506 INFO [RS:8;10.10.9.179:52482] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,506 INFO [RS:6;10.10.9.179:52476] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,506 INFO [RS:5;10.10.9.179:52473] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,506 INFO [RS:3;10.10.9.179:52464] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,506 INFO [RS:7;10.10.9.179:52479] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,506 DEBUG [RS:1;10.10.9.179:52454] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,506 DEBUG [RS:2;10.10.9.179:52460] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,506 DEBUG [RS:4;10.10.9.179:52467] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,505 DEBUG [RS:9;10.10.9.179:52485] regionserver.Replication(152): ReplicationStatisticsThread 300
2016-12-02 15:29:01,515 INFO [RS:4;10.10.9.179:52467] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,512 INFO [RS:7;10.10.9.179:52479] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,512 INFO [RS:2;10.10.9.179:52460] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,509 INFO [RS:1;10.10.9.179:52454] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,509 INFO [RS:3;10.10.9.179:52464] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,507 INFO [RS:6;10.10.9.179:52476] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,507 INFO [RS:8;10.10.9.179:52482] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,506 INFO [RS:0;10.10.9.179:52450] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,522 INFO [RS:1;10.10.9.179:52454] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,522 INFO [RS:2;10.10.9.179:52460] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,522 INFO [RS:4;10.10.9.179:52467] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,521 INFO [RS:9;10.10.9.179:52485] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,522 INFO [RS:9;10.10.9.179:52485] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52473, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52473, corePoolSize=1, maxPoolSize=1
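
Every region server above instantiates FSHLogProvider, the file-system-backed WAL provider, and reports a 5000 ms metrics refresh period. Both are configuration-driven; the sketch below reads the relevant keys (the key names are real for HBase of this era, and the defaults shown are the values implied by this log; treat the "filesystem" mapping as an assumption for this branch):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalAndMetricsConfig {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "filesystem" selects FSHLogProvider as the WALProvider.
        System.out.println(conf.get("hbase.wal.provider", "filesystem"));
        // Metrics wrapper recomputation period in milliseconds, 5000 as logged.
        System.out.println(conf.getInt("hbase.regionserver.metrics.period", 5000));
      }
    }
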
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52473, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52473, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52473, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52473, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52473, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,525 DEBUG [RS:5;10.10.9.179:52473] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52473, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,551 INFO [10.10.9.179:52448.activeMasterManager] regionserver.RSRpcServices(1772): Open hbase:meta,,1.1588230740
2016-12-02 15:29:01,558 INFO [RS_OPEN_META-10.10.9.179:52448-0] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:01,559 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52464, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,559 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52464, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,559 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52482, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,559 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52476, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,559 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52464, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,559 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52476, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,559 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52476, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,560 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52476, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,560 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52476, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,560 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52476, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,560 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52476, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,560 DEBUG [RS:6;10.10.9.179:52476] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52476, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,559 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52482, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,559 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52464, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,561 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52464, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,561 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52464, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,561 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52464, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,561 DEBUG [RS:3;10.10.9.179:52464] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52464, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,561 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52482, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,561 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52482, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,562 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52482, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,562 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52482, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,562 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52482, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,562 DEBUG [RS:8;10.10.9.179:52482] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52482, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,562 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52454, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,563 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52454, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,563 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52454, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,563 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52454, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,563 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52454, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,563 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52467, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,563 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52467, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,563 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52460, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,563 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52460, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,564 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52460, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,563 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52454, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,564 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52454, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,564 DEBUG [RS:1;10.10.9.179:52454] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52454, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,564 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52460, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,563 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52467, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,564 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52467, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,564 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52467, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,564 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52467, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,564 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52460, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,565 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52460, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,565 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52460, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,565 DEBUG [RS:2;10.10.9.179:52460] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52460, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,565 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52467, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,565 DEBUG [RS:4;10.10.9.179:52467] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52467, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,572 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52450, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,573 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52479, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,573 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52450, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,590 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52485, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,590 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52479, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,590 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52485, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,590 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52450, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,591 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52485, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,591 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52479, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,591 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52485, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,591 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52450, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,591 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52485, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,591 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52479, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,591 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52485, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,591 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52450, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,592 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52485, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,591 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52479, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:01,592 DEBUG [RS:9;10.10.9.179:52485] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52485, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,592 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52450, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,592 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52479, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:01,592 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52450, corePoolSize=10, maxPoolSize=10
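
The pool sizes in the executor lines above (3 open-region threads, 1 meta thread, 2 log-replay threads, 10 compacted-files-discharger threads, and so on) come from region server executor settings. A sketch of reading a few of the keys (the key names below are the ones HBase of this era uses in HRegionServer, with the defaults visible in the log; hedged, not verified against this exact build):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ExecutorPoolConfig {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Defaults match the corePoolSize values logged above.
        System.out.println(conf.getInt("hbase.regionserver.executor.openregion.threads", 3));
        System.out.println(conf.getInt("hbase.regionserver.executor.openmeta.threads", 1));
        System.out.println(conf.getInt("hbase.regionserver.executor.closeregion.threads", 3));
        System.out.println(conf.getInt("hbase.regionserver.executor.closemeta.threads", 1));
      }
    }
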
2016-12-02 15:29:01,592 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52479, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:01,598 WARN [RS_OPEN_META-10.10.9.179:52448-0] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated.
2016-12-02 15:29:01,594 DEBUG [RS:0;10.10.9.179:52450] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52450, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,609 INFO [RS_OPEN_META-10.10.9.179:52448-0] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52448%2C1480721340079.meta, suffix=.meta, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52448,1480721340079, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:01,609 DEBUG [RS:7;10.10.9.179:52479] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52479, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:01,610 INFO [RS:5;10.10.9.179:52473] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7e04c2b9 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,629 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7e04c2b90x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,631 INFO [RS:8;10.10.9.179:52482] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1a58eb65 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,631 INFO [RS:2;10.10.9.179:52460] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x288cbb30 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,633 INFO [RS:1;10.10.9.179:52454] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7f042745 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,635 INFO [RS:9;10.10.9.179:52485] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7b06d682 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,637 INFO [RS:4;10.10.9.179:52467] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5f461338 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,639 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x7e04c2b9-0x158c1de825b0027 connected
2016-12-02 15:29:01,640 INFO [RS:5;10.10.9.179:52473] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:01,641 DEBUG [RS:5;10.10.9.179:52473] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:01,642 INFO [RS:8;10.10.9.179:52482] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:01,644 DEBUG [RS:8;10.10.9.179:52482] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:01,642 INFO [RS:1;10.10.9.179:52454] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:01,644 DEBUG [RS:1;10.10.9.179:52454] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:01,642 INFO [RS:4;10.10.9.179:52467] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
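
A quick check of the "WAL configuration: blocksize=20 KB, rollsize=19 KB" line above: the roll size is the block size scaled by the log-roll multiplier (key "hbase.regionserver.logroll.multiplier", default 0.95). The 20 KB block size ("hbase.regionserver.hlog.blocksize") is clearly a test-only value chosen to force frequent WAL rolls; production block sizes are far larger.

    public class WalRollMath {
      public static void main(String[] args) {
        long blocksize = 20 * 1024;    // 20 KB, as logged
        float multiplier = 0.95f;      // default hbase.regionserver.logroll.multiplier
        long rollsize = (long) (blocksize * multiplier);
        System.out.println(rollsize);  // 19456 bytes = 19 KB, matching rollsize=19 KB
      }
    }
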
[RS:2;10.10.9.179:52460] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:01,645 DEBUG [RS:2;10.10.9.179:52460] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:01,642 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x1a58eb650x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,642 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7f0427450x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,642 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x5f4613380x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,642 INFO [RS:6;10.10.9.179:52476] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6944dbfa connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:01,641 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x288cbb300x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,645 DEBUG [RS:8;10.10.9.179:52482] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@5ce7e76a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:01,645 DEBUG [RS:4;10.10.9.179:52467] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:01,644 INFO [RS:9;10.10.9.179:52485] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null 2016-12-02 15:29:01,644 DEBUG [RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7b06d6820x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:01,646 DEBUG [RS:9;10.10.9.179:52485] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster 2016-12-02 15:29:01,646 DEBUG [RS:4;10.10.9.179:52467] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@7983fab4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:01,646 DEBUG [RS:1;10.10.9.179:52454] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@2657dc73, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:01,646 DEBUG [RS:9;10.10.9.179:52485] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@1c795e4e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:01,646 DEBUG [RS:2;10.10.9.179:52460] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@3aadd89, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,647 DEBUG [RS:5;10.10.9.179:52473] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@760fcfae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,651 DEBUG [RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x7b06d682-0x158c1de825b002c connected
2016-12-02 15:29:01,651 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x288cbb30-0x158c1de825b0028 connected
2016-12-02 15:29:01,651 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x5f461338-0x158c1de825b0029 connected
2016-12-02 15:29:01,651 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x7f042745-0x158c1de825b002a connected
2016-12-02 15:29:01,651 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x1a58eb65-0x158c1de825b002b connected
2016-12-02 15:29:01,647 INFO [RS:3;10.10.9.179:52464] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x178417d1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,656 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x6944dbfa0x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,658 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x6944dbfa-0x158c1de825b002d connected
2016-12-02 15:29:01,659 INFO [RS:6;10.10.9.179:52476] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:01,659 DEBUG [RS:6;10.10.9.179:52476] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:01,659 INFO [RS:3;10.10.9.179:52464] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:01,659 DEBUG [RS:3;10.10.9.179:52464] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:01,659 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x178417d10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,659 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x178417d1-0x158c1de825b002e connected
2016-12-02 15:29:01,659 DEBUG [RS:3;10.10.9.179:52464] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@e84f7ce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,660 DEBUG [RS:6;10.10.9.179:52476] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@139da4e6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,662 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,662 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,662 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,663 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,663 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,663 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,663 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,663 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,663 INFO [RS:7;10.10.9.179:52479] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x277e7ba7 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,664 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,664 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,664 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,664 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,664 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,664 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,665 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,665 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
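The ZKUtil(363) lines above show each region server's ReplicationExecutor enumerating the live servers registered under /1/rs and re-arming a watch on every one of them. A minimal sketch of that pattern with the raw Apache ZooKeeper client (not HBase's internal ZKUtil; the ensemble address and znode paths are taken from the log, the class and variable names are illustrative):

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsWatchSketch {
      public static void main(String[] args) throws Exception {
        // Ensemble and base znode as they appear in the log
        // (quorum=localhost:60648, baseZNode=/1).
        ZooKeeper zk = new ZooKeeper("localhost:60648", 30000, new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // The ZooKeeperWatcher(466) entries correspond to events like this one.
            System.out.println("Received ZooKeeper Event, type=" + event.getType()
                + ", state=" + event.getState() + ", path=" + event.getPath());
          }
        });
        // List the region server ephemeral znodes, leaving a child watch,
        // as the replicator scan does.
        List<String> servers = zk.getChildren("/1/rs", true);
        for (String s : servers) {
          // "Set watcher on existing znode" -- an exists() call with
          // watch=true re-arms a watch on each server znode.
          zk.exists("/1/rs/" + s, true);
          System.out.println("Set watcher on existing znode=/1/rs/" + s);
        }
      }
    }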
2016-12-02 15:29:01,665 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x277e7ba70x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,665 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,665 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,666 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,665 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x277e7ba7-0x158c1de825b002f connected
2016-12-02 15:29:01,666 INFO [RS:7;10.10.9.179:52479] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:01,669 INFO [RS:4;10.10.9.179:52467.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,669 INFO [RS:5;10.10.9.179:52473.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,669 INFO [RS:9;10.10.9.179:52485.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,669 INFO [RS:2;10.10.9.179:52460.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,669 INFO [RS:8;10.10.9.179:52482.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,669 INFO [RS:1;10.10.9.179:52454.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,669 INFO [RS:3;10.10.9.179:52464.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,666 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,666 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,674 INFO [RS:6;10.10.9.179:52476.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,673 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,675 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,673 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,672 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,672 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,670 INFO [RS:0;10.10.9.179:52450] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x30f92e9f connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,669 DEBUG [RS:7;10.10.9.179:52479] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:01,676 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,675 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,675 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,676 DEBUG [RS:7;10.10.9.179:52479] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@3f254c3b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,676 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,676 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,676 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,676 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,676 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,676 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,677 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,682 INFO [RS:7;10.10.9.179:52479.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,679 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,679 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,678 INFO [RS:0;10.10.9.179:52450] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:01,682 DEBUG [RS:0;10.10.9.179:52450] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:01,678 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x30f92e9f0x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,677 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,677 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,677 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,683 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,677 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,677 DEBUG [RS:6;10.10.9.179:52476.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,683 DEBUG [RS:6;10.10.9.179:52476.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0030 connected
2016-12-02 15:29:01,683 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,683 DEBUG [RS:0;10.10.9.179:52450] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@49e856c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:01,683 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x30f92e9f-0x158c1de825b0031 connected
2016-12-02 15:29:01,682 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,682 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,683 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,683 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,683 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,683 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,687 INFO [RS:0;10.10.9.179:52450.replicationSource,1] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: 1 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:01,688 DEBUG [RS:9;10.10.9.179:52485.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,688 DEBUG [RS:9;10.10.9.179:52485.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0032 connected
2016-12-02 15:29:01,688 DEBUG [RS:7;10.10.9.179:52479.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,688 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,688 DEBUG [RS:7;10.10.9.179:52479.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0033 connected
2016-12-02 15:29:01,689 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,689 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,689 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,689 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,689 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,689 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,689 DEBUG [RS:5;10.10.9.179:52473.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,689 DEBUG [RS:5;10.10.9.179:52473.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0034 connected
2016-12-02 15:29:01,690 DEBUG [RS:8;10.10.9.179:52482.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,690 DEBUG [RS:8;10.10.9.179:52482.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0035 connected
2016-12-02 15:29:01,690 DEBUG [RS:2;10.10.9.179:52460.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,690 DEBUG [RS:2;10.10.9.179:52460.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0036 connected
2016-12-02 15:29:01,690 DEBUG [RS:4;10.10.9.179:52467.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,690 DEBUG [RS:4;10.10.9.179:52467.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0037 connected
2016-12-02 15:29:01,690 DEBUG [RS:1;10.10.9.179:52454.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
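The "connection to cluster: 1" entries are each replication source opening its own ZooKeeper session to the peer cluster and waiting for the SyncConnected event before proceeding. A minimal sketch of that connect-and-wait handshake, assuming the raw ZooKeeper client and the ensemble from the log (everything else is illustrative):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.Watcher.Event.KeeperState;
    import org.apache.zookeeper.ZooKeeper;

    public class PeerConnectionSketch {
      public static void main(String[] args) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // Each replication source opens its own session; the
        // "connection to cluster: 1 ... connected" lines appear once
        // this state change arrives on the event thread.
        ZooKeeper peer = new ZooKeeper("localhost:60648", 30000, new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            if (event.getState() == KeeperState.SyncConnected) {
              connected.countDown();
            }
          }
        });
        connected.await();
        System.out.println("connection to cluster: 1-0x"
            + Long.toHexString(peer.getSessionId()) + " connected");
      }
    }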
2016-12-02 15:29:01,690 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,690 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,690 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,690 DEBUG [RS:1;10.10.9.179:52454.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0038 connected
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,691 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52479,1480721340539
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,692 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,693 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,693 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:01,695 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,696 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,696 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,696 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,698 DEBUG [RS:0;10.10.9.179:52450.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,696 DEBUG [RS:3;10.10.9.179:52464.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 10x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:01,704 DEBUG [RS:3;10.10.9.179:52464.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b0039 connected
2016-12-02 15:29:01,696 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:01,696 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,696 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,696 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,696 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,705 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,705 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:01,705 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,704 DEBUG [RS:0;10.10.9.179:52450.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(529): connection to cluster: 1-0x158c1de825b003a connected
2016-12-02 15:29:01,707 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,705 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,705 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,705 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,708 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,708 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:01,708 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,712 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:01,722 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:01,722 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:01,725 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on existing znode=/1/rs/10.10.9.179,52482,1480721340569
2016-12-02 15:29:01,725 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52464,1480721340388, 10.10.9.179,52476,1480721340506, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52460,1480721340350, 10.10.9.179,52450,1480721340274, 10.10.9.179,52454,1480721340310, 10.10.9.179,52482,1480721340569] other RSs: [10.10.9.179,52476,1480721340506, 10.10.9.179,52464,1480721340388, 10.10.9.179,52479,1480721340539, 10.10.9.179,52467,1480721340421, 10.10.9.179,52473,1480721340476, 10.10.9.179,52485,1480721340604, 10.10.9.179,52450,1480721340274, 10.10.9.179,52460,1480721340350, 10.10.9.179,52454,1480721340310, 10.10.9.179,52448,1480721340079, 10.10.9.179,52482,1480721340569]
2016-12-02 15:29:01,746 INFO [RS_OPEN_META-10.10.9.179:52448-0] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52448,1480721340079/10.10.9.179%2C52448%2C1480721340079.meta.1480721341609.meta
2016-12-02 15:29:01,749 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52403,DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a,DISK], DatanodeInfoWithStorage[127.0.0.1:52428,DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1,DISK], DatanodeInfoWithStorage[127.0.0.1:52420,DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083,DISK]]
2016-12-02 15:29:01,788 INFO [RS:7;10.10.9.179:52479] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore.
2016-12-02 15:29:01,800 INFO [RS:4;10.10.9.179:52467] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore.
2016-12-02 15:29:01,797 INFO [SplitLogWorker-10.10.9.179:52460] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52460,1480721340350 starting
2016-12-02 15:29:01,797 INFO [RS:2;10.10.9.179:52460] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore.
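The AbstractFSWAL entries just above record the meta WAL file being created on the mini DFS and the three-DataNode write pipeline HDFS assigned to the writer. A rough sketch of the underlying file creation with the plain Hadoop FileSystem API (not HBase's FSHLog; the replication factor of 3 mirrors the pipeline in the log, while the path, buffer size, and block size are illustrative and assume fs.defaultFS points at the test DFS):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WalCreateSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // The WAL is just an HDFS file; replication=3 is what produces the
        // three-entry DatanodeInfoWithStorage pipeline seen in the log.
        Path wal = new Path("/user/tyu/test-data/WALs/example.meta"); // illustrative path
        FSDataOutputStream out = fs.create(wal, true, 4096, (short) 3, 64L * 1024 * 1024);
        out.hsync(); // WAL writers sync to the pipeline rather than waiting for close
        out.close();
      }
    }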
2016-12-02 15:29:01,793 INFO [SplitLogWorker-10.10.9.179:52450] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52450,1480721340274 starting 2016-12-02 15:29:01,793 INFO [RS:0;10.10.9.179:52450] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 2016-12-02 15:29:01,792 INFO [SplitLogWorker-10.10.9.179:52464] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52464,1480721340388 starting 2016-12-02 15:29:01,792 INFO [RS:3;10.10.9.179:52464] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 2016-12-02 15:29:01,788 INFO [SplitLogWorker-10.10.9.179:52479] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52479,1480721340539 starting 2016-12-02 15:29:01,801 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52450,1480721340274, RpcServer on 10.10.9.179/10.10.9.179:52450, sessionid=0x158c1de825b0005 2016-12-02 15:29:01,801 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52464,1480721340388, RpcServer on 10.10.9.179/10.10.9.179:52464, sessionid=0x158c1de825b0008 2016-12-02 15:29:01,800 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52479,1480721340539, RpcServer on 10.10.9.179/10.10.9.179:52479, sessionid=0x158c1de825b000c 2016-12-02 15:29:01,800 INFO [SplitLogWorker-10.10.9.179:52482] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52482,1480721340569 starting 2016-12-02 15:29:01,800 INFO [RS:8;10.10.9.179:52482] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 2016-12-02 15:29:01,800 INFO [SplitLogWorker-10.10.9.179:52485] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52485,1480721340604 starting 2016-12-02 15:29:01,800 INFO [RS:9;10.10.9.179:52485] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 
2016-12-02 15:29:01,802 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52485,1480721340604, RpcServer on 10.10.9.179/10.10.9.179:52485, sessionid=0x158c1de825b000e 2016-12-02 15:29:01,802 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,803 DEBUG [RS:9;10.10.9.179:52485] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52485,1480721340604 2016-12-02 15:29:01,803 DEBUG [RS:9;10.10.9.179:52485] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52485,1480721340604' 2016-12-02 15:29:01,803 DEBUG [RS:9;10.10.9.179:52485] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,800 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52467,1480721340421, RpcServer on 10.10.9.179/10.10.9.179:52467, sessionid=0x158c1de825b0009 2016-12-02 15:29:01,803 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,803 DEBUG [RS:4;10.10.9.179:52467] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52467,1480721340421 2016-12-02 15:29:01,803 DEBUG [RS:4;10.10.9.179:52467] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52467,1480721340421' 2016-12-02 15:29:01,803 DEBUG [RS:4;10.10.9.179:52467] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,800 INFO [SplitLogWorker-10.10.9.179:52467] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52467,1480721340421 starting 2016-12-02 15:29:01,803 DEBUG [RS:9;10.10.9.179:52485] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,803 DEBUG [10.10.9.179:52448.activeMasterManager] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721341596}]}} 2016-12-02 15:29:01,802 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52482,1480721340569, RpcServer on 10.10.9.179/10.10.9.179:52482, sessionid=0x158c1de825b000d 2016-12-02 15:29:01,804 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,804 DEBUG [RS:8;10.10.9.179:52482] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52482,1480721340569 2016-12-02 15:29:01,804 DEBUG [RS:8;10.10.9.179:52482] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52482,1480721340569' 2016-12-02 15:29:01,804 DEBUG [RS:8;10.10.9.179:52482] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,802 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,804 DEBUG [RS:7;10.10.9.179:52479] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52479,1480721340539 2016-12-02 15:29:01,804 DEBUG [RS:7;10.10.9.179:52479] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52479,1480721340539' 
2016-12-02 15:29:01,804 DEBUG [RS:7;10.10.9.179:52479] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,801 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52460,1480721340350, RpcServer on 10.10.9.179/10.10.9.179:52460, sessionid=0x158c1de825b0007 2016-12-02 15:29:01,802 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,805 DEBUG [RS:3;10.10.9.179:52464] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52464,1480721340388 2016-12-02 15:29:01,805 DEBUG [RS:3;10.10.9.179:52464] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52464,1480721340388' 2016-12-02 15:29:01,805 DEBUG [RS:3;10.10.9.179:52464] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,802 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,805 DEBUG [RS:0;10.10.9.179:52450] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52450,1480721340274 2016-12-02 15:29:01,805 DEBUG [RS:0;10.10.9.179:52450] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52450,1480721340274' 2016-12-02 15:29:01,805 DEBUG [RS:0;10.10.9.179:52450] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,805 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,805 DEBUG [RS:2;10.10.9.179:52460] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52460,1480721340350 2016-12-02 15:29:01,805 DEBUG [RS:7;10.10.9.179:52479] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,804 DEBUG [RS:8;10.10.9.179:52482] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,804 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,804 DEBUG [RS:4;10.10.9.179:52467] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,806 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,806 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,806 DEBUG [RS:7;10.10.9.179:52479] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52479,1480721340539 2016-12-02 15:29:01,806 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,806 DEBUG [RS:9;10.10.9.179:52485] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52485,1480721340604 2016-12-02 15:29:01,806 DEBUG [RS:9;10.10.9.179:52485] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52485,1480721340604' 2016-12-02 15:29:01,806 DEBUG [RS:9;10.10.9.179:52485] 
procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,806 DEBUG [RS:2;10.10.9.179:52460] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52460,1480721340350' 2016-12-02 15:29:01,806 INFO [SplitLogWorker-10.10.9.179:52476] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52476,1480721340506 starting 2016-12-02 15:29:01,806 INFO [RS:6;10.10.9.179:52476] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 2016-12-02 15:29:01,805 DEBUG [RS:0;10.10.9.179:52450] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,807 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52476,1480721340506, RpcServer on 10.10.9.179/10.10.9.179:52476, sessionid=0x158c1de825b000b 2016-12-02 15:29:01,805 DEBUG [RS:3;10.10.9.179:52464] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,807 DEBUG [RS:9;10.10.9.179:52485] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,807 DEBUG [RS:2;10.10.9.179:52460] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,806 DEBUG [RS:7;10.10.9.179:52479] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52479,1480721340539' 2016-12-02 15:29:01,808 DEBUG [RS:7;10.10.9.179:52479] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,806 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,808 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,806 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,808 DEBUG [RS:4;10.10.9.179:52467] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52467,1480721340421 2016-12-02 15:29:01,808 DEBUG [RS:4;10.10.9.179:52467] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52467,1480721340421' 2016-12-02 15:29:01,808 DEBUG [RS:4;10.10.9.179:52467] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,808 DEBUG [RS:2;10.10.9.179:52460] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,808 DEBUG [RS:9;10.10.9.179:52485] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,808 INFO [RS:9;10.10.9.179:52485] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,808 DEBUG [RS:4;10.10.9.179:52467] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,808 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,809 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,809 DEBUG [RS:3;10.10.9.179:52464] snapshot.RegionServerSnapshotManager(124): 
Start Snapshot Manager 10.10.9.179,52464,1480721340388 2016-12-02 15:29:01,809 DEBUG [RS:3;10.10.9.179:52464] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52464,1480721340388' 2016-12-02 15:29:01,812 DEBUG [RS:3;10.10.9.179:52464] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,807 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,813 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,812 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,809 DEBUG [RS:4;10.10.9.179:52467] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,813 INFO [RS:4;10.10.9.179:52467] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,809 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,813 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,813 DEBUG [RS:2;10.10.9.179:52460] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52460,1480721340350 2016-12-02 15:29:01,808 INFO [SplitLogWorker-10.10.9.179:52454] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52454,1480721340310 starting 2016-12-02 15:29:01,808 INFO [RS:1;10.10.9.179:52454] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 2016-12-02 15:29:01,808 DEBUG [RS:7;10.10.9.179:52479] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,808 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,813 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52454,1480721340310, RpcServer on 10.10.9.179/10.10.9.179:52454, sessionid=0x158c1de825b0006 2016-12-02 15:29:01,813 DEBUG [RS:2;10.10.9.179:52460] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52460,1480721340350' 2016-12-02 15:29:01,813 DEBUG [RS:3;10.10.9.179:52464] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,813 DEBUG [RS:6;10.10.9.179:52476] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52476,1480721340506 2016-12-02 15:29:01,814 DEBUG [RS:6;10.10.9.179:52476] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52476,1480721340506' 2016-12-02 15:29:01,814 DEBUG [RS:6;10.10.9.179:52476] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,814 DEBUG [RS:3;10.10.9.179:52464] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,814 INFO [RS:3;10.10.9.179:52464] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,813 DEBUG [RS:0;10.10.9.179:52450] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52450,1480721340274 2016-12-02 15:29:01,814 DEBUG [RS:6;10.10.9.179:52476] 
procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,814 DEBUG [RS:2;10.10.9.179:52460] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,814 DEBUG [RS:7;10.10.9.179:52479] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,815 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,814 DEBUG [RS:1;10.10.9.179:52454] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,813 DEBUG [RS:8;10.10.9.179:52482] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52482,1480721340569 2016-12-02 15:29:01,815 DEBUG [RS:1;10.10.9.179:52454] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52454,1480721340310 2016-12-02 15:29:01,815 DEBUG [RS:1;10.10.9.179:52454] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52454,1480721340310' 2016-12-02 15:29:01,815 DEBUG [RS:1;10.10.9.179:52454] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,815 DEBUG [RS:2;10.10.9.179:52460] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,815 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,815 DEBUG [RS:6;10.10.9.179:52476] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52476,1480721340506 2016-12-02 15:29:01,815 DEBUG [RS:6;10.10.9.179:52476] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52476,1480721340506' 2016-12-02 15:29:01,815 DEBUG [RS:6;10.10.9.179:52476] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,815 INFO [RS:7;10.10.9.179:52479] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,814 DEBUG [RS:0;10.10.9.179:52450] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52450,1480721340274' 2016-12-02 15:29:01,816 DEBUG [RS:0;10.10.9.179:52450] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,815 DEBUG [RS:2;10.10.9.179:52460] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,815 DEBUG [RS:1;10.10.9.179:52454] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,815 DEBUG [RS:8;10.10.9.179:52482] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52482,1480721340569' 2016-12-02 15:29:01,816 DEBUG [RS:8;10.10.9.179:52482] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,816 INFO [RS:2;10.10.9.179:52460] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,816 DEBUG [RS:6;10.10.9.179:52476] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,816 DEBUG [RS:1;10.10.9.179:52454] procedure.RegionServerProcedureManagerHost(54): Procedure 
flush-table-proc is started 2016-12-02 15:29:01,816 DEBUG [RS:1;10.10.9.179:52454] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,816 DEBUG [RS:1;10.10.9.179:52454] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52454,1480721340310 2016-12-02 15:29:01,817 DEBUG [RS:1;10.10.9.179:52454] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52454,1480721340310' 2016-12-02 15:29:01,817 DEBUG [RS:1;10.10.9.179:52454] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,816 DEBUG [RS:0;10.10.9.179:52450] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,817 DEBUG [RS:6;10.10.9.179:52476] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,817 INFO [RS:6;10.10.9.179:52476] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,816 DEBUG [RS:8;10.10.9.179:52482] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,817 DEBUG [RS:1;10.10.9.179:52454] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,817 DEBUG [RS:0;10.10.9.179:52450] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,817 INFO [RS:0;10.10.9.179:52450] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,817 DEBUG [RS:8;10.10.9.179:52482] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,817 DEBUG [RS:1;10.10.9.179:52454] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,818 INFO [RS:8;10.10.9.179:52482] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,818 INFO [RS:1;10.10.9.179:52454] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,820 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] regionserver.HRegion(6583): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2016-12-02 15:29:01,840 INFO [RS:5;10.10.9.179:52473] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 
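
The RS:0 through RS:9 threads above are all running the same startup sequence: each region server joins the flush-table-proc and online-snapshot procedure pools by checking the shared abort znode and arming a watcher on the acquired znode. A minimal sketch of that watch-and-rescan pattern, using the plain ZooKeeper client against the quorum shown in the log rather than HBase's ZKProcedureMemberRpcs itself (the class name, session timeout, and error handling here are illustrative assumptions):

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class AcquiredZnodeWatcher {
      public static void main(String[] args) throws Exception {
        // Quorum and znode path come from the log; the session timeout is an assumption.
        final ZooKeeper zk = new ZooKeeper("localhost:60648", 30000, event -> { });
        final String acquired = "/1/flush-table-proc/acquired";
        Watcher rescan = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            try {
              // Re-arm the watcher and list any procedures offered since the last scan.
              List<String> procs = zk.getChildren(acquired, this);
              procs.forEach(p -> System.out.println("procedure offered: " + p));
            } catch (Exception e) {
              e.printStackTrace();
            }
          }
        };
        // Initial scan, mirroring "Looking for new procedures under znode:..." above.
        System.out.println("at startup: " + zk.getChildren(acquired, rescan));
      }
    }
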
2016-12-02 15:29:01,840 INFO [SplitLogWorker-10.10.9.179:52473] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52473,1480721340476 starting 2016-12-02 15:29:01,841 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52473,1480721340476, RpcServer on 10.10.9.179/10.10.9.179:52473, sessionid=0x158c1de825b000a 2016-12-02 15:29:01,841 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:01,841 DEBUG [RS:5;10.10.9.179:52473] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52473,1480721340476 2016-12-02 15:29:01,841 DEBUG [RS:5;10.10.9.179:52473] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52473,1480721340476' 2016-12-02 15:29:01,841 DEBUG [RS:5;10.10.9.179:52473] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/flush-table-proc/abort' 2016-12-02 15:29:01,841 DEBUG [RS:5;10.10.9.179:52473] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/flush-table-proc/acquired' 2016-12-02 15:29:01,842 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:01,842 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:01,842 DEBUG [RS:5;10.10.9.179:52473] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52473,1480721340476 2016-12-02 15:29:01,842 DEBUG [RS:5;10.10.9.179:52473] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52473,1480721340476' 2016-12-02 15:29:01,842 DEBUG [RS:5;10.10.9.179:52473] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2016-12-02 15:29:01,842 DEBUG [RS:5;10.10.9.179:52473] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2016-12-02 15:29:01,843 DEBUG [RS:5;10.10.9.179:52473] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:01,843 INFO [RS:5;10.10.9.179:52473] quotas.RegionServerQuotaManager(62): Quota support disabled 2016-12-02 15:29:01,870 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] coprocessor.CoprocessorHost(202): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2016-12-02 15:29:01,895 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] regionserver.HRegion(7728): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2016-12-02 15:29:01,898 INFO [RS_OPEN_META-10.10.9.179:52448-0] regionserver.RegionCoprocessorHost(368): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
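
The RS_OPEN_META records above show the MultiRowMutationEndpoint coprocessor being resolved from the hbase:meta table descriptor and loaded at region open. For a user table, attaching a coprocessor to a descriptor with the era-appropriate client API looks roughly like this hedged sketch (the table name "example" is hypothetical; the coprocessor class is the one from the log):

    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;

    public class CoprocessorExample {
      public static void main(String[] args) throws Exception {
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("example"));
        // Region servers resolve this class at region open, as the
        // RS_OPEN_META thread does for hbase:meta in the log above.
        htd.addCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint");
        System.out.println(htd);
      }
    }
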
2016-12-02 15:29:01,907 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table meta 1588230740 2016-12-02 15:29:01,907 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] regionserver.HRegion(743): Instantiated hbase:meta,,1.1588230740 2016-12-02 15:29:01,913 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:01,913 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:01,913 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:01,915 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:01,915 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:01,915 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:01,917 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:01,917 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_meta: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:01,917 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:01,919 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:01,919 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_position: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:01,920 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:01,921 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:01,921 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:01,922 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:01,927 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under 
hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740 2016-12-02 15:29:01,930 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (26843545) instead 2016-12-02 15:29:01,935 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/recovered.edits/3.seqid to file, newSeqId=3, maxSeqId=2 2016-12-02 15:29:01,935 INFO [RS_OPEN_META-10.10.9.179:52448-0] regionserver.HRegion(893): Onlined 1588230740; next sequenceid=3 2016-12-02 15:29:01,974 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1995): Post open deploy tasks for hbase:meta,,1.1588230740 2016-12-02 15:29:01,988 DEBUG [PostOpenDeployTasks:1588230740] master.AssignmentManager(2949): Got transition OPENED for {1588230740 state=PENDING_OPEN, ts=1480721341479, server=10.10.9.179,52448,1480721340079} from 10.10.9.179,52448,1480721340079 2016-12-02 15:29:01,988 INFO [PostOpenDeployTasks:1588230740] master.RegionStates(1139): Transition {1588230740 state=PENDING_OPEN, ts=1480721341479, server=10.10.9.179,52448,1480721340079} to {1588230740 state=OPEN, ts=1480721341988, server=10.10.9.179,52448,1480721340079} 2016-12-02 15:29:01,988 INFO [PostOpenDeployTasks:1588230740] zookeeper.MetaTableLocator(442): Setting hbase:meta region location in ZooKeeper as 10.10.9.179,52448,1480721340079 2016-12-02 15:29:01,993 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/meta-region-server 2016-12-02 15:29:01,993 DEBUG [PostOpenDeployTasks:1588230740] master.RegionStates(466): Onlined 1588230740 on 10.10.9.179,52448,1480721340079 2016-12-02 15:29:01,997 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2022): Finished post open deploy task for hbase:meta,,1.1588230740 2016-12-02 15:29:01,998 DEBUG [RS_OPEN_META-10.10.9.179:52448-0] handler.OpenRegionHandler(126): Opened hbase:meta,,1.1588230740 on 10.10.9.179,52448,1480721340079 2016-12-02 15:29:02,138 INFO [10.10.9.179:52448.activeMasterManager] hbase.MetaTableAccessor(1768): Updated table hbase:meta state to ENABLED in META 2016-12-02 15:29:02,138 DEBUG [10.10.9.179:52448.activeMasterManager] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721342138}]}} 2016-12-02 15:29:02,145 INFO [10.10.9.179:52448.activeMasterManager] hbase.MetaTableAccessor(1768): Updated table hbase:meta state to ENABLED in META 2016-12-02 15:29:02,224 INFO [10.10.9.179:52448.activeMasterManager] master.ServerManager(681): AssignmentManager hasn't finished failover cleanup; waiting 2016-12-02 15:29:02,225 INFO [10.10.9.179:52448.activeMasterManager] master.MasterMetaBootstrap(217): hbase:meta with replicaId 0 assigned=1, location=10.10.9.179,52448,1480721340079 2016-12-02 15:29:02,238 INFO [10.10.9.179:52448.activeMasterManager] master.AssignmentManager(580): Clean cluster startup. 
Don't reassign user regions 2016-12-02 15:29:02,238 INFO [10.10.9.179:52448.activeMasterManager] master.AssignmentManager(450): Joined the cluster in 11ms, failover=false 2016-12-02 15:29:02,264 INFO [10.10.9.179:52448.activeMasterManager] master.TableNamespaceManager(91): Namespace table not found. Creating... 2016-12-02 15:29:02,267 INFO [10.10.9.179:52448.activeMasterManager] master.HMaster(1617): Client=null/null create 'hbase:namespace', {NAME => 'info', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} 2016-12-02 15:29:02,438 DEBUG [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor(706): Procedure CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu.hfs.9 state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 2016-12-02 15:29:02,464 DEBUG [ProcedureExecutorWorker-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/hbase:namespace/write-master:524480000000000 2016-12-02 15:29:02,478 WARN [M:0;10.10.9.179:52448] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,478 INFO [M:0;10.10.9.179:52448] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52448%2C1480721340079, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52448,1480721340079, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,513 INFO [M:0;10.10.9.179:52448] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52448,1480721340079/10.10.9.179%2C52448%2C1480721340079.1480721342478 2016-12-02 15:29:02,513 DEBUG [M:0;10.10.9.179:52448] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52432,DS-7b1dc621-03e2-4208-a089-45882edf6203,DISK], DatanodeInfoWithStorage[127.0.0.1:52407,DS-378ff72f-b0d6-4d05-815b-ae7795fe2171,DISK], DatanodeInfoWithStorage[127.0.0.1:52412,DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3,DISK]] 2016-12-02 15:29:02,610 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW]]} size 0 2016-12-02 15:29:02,610 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|FINALIZED]]} size 0 2016-12-02 15:29:02,615 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is added 
to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|FINALIZED], ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|FINALIZED]]} size 0 2016-12-02 15:29:02,618 DEBUG [ProcedureExecutorWorker-1] util.FSTableDescriptors(707): Wrote descriptor into: hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2016-12-02 15:29:02,622 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6406): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/.tmp Table name == hbase:namespace 2016-12-02 15:29:02,653 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW]]} size 0 2016-12-02 15:29:02,656 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW]]} size 0 2016-12-02 15:29:02,657 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW]]} size 0 2016-12-02 15:29:02,659 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(743): Instantiated hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 2016-12-02 15:29:02,660 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1486): Closing hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02.: disabling compactions & flushes 2016-12-02 15:29:02,660 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1525): Updates disabled for region hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 
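
Every CacheConfig record above prints the same LruBlockCache breakdown. As a hedged arithmetic check (reproducing the printed numbers, not quoting HBase source): the min/single/multi watermarks are consistent with maxSize scaled by the printed factors in single-precision float arithmetic:

    public class LruSizeCheck {
      public static void main(String[] args) {
        long maxSize = 1043962304L;                          // maxSize from the log
        long minSize    = (long) (maxSize * 0.95f);          // -> 991764160 (minFactor=0.95)
        long singleSize = (long) (maxSize * 0.25f * 0.95f);  // -> 247941040 (singleFactor=0.25)
        long multiSize  = (long) (maxSize * 0.50f * 0.95f);  // -> 495882080 (multiFactor=0.5)
        System.out.printf("min=%d single=%d multi=%d%n", minSize, singleSize, multiSize);
      }
    }

All three computed values match the logged minSize, singleSize, and multiSize exactly.
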
2016-12-02 15:29:02,660 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1643): Closed hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 2016-12-02 15:29:02,783 DEBUG [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1417): Put{"totalColumns":1,"row":"hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:02,788 INFO [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1614): Added 1 2016-12-02 15:29:02,817 WARN [RS:9;10.10.9.179:52485] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,818 INFO [RS:9;10.10.9.179:52485] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52485%2C1480721340604, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52485,1480721340604, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,834 WARN [RS:0;10.10.9.179:52450] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,834 WARN [RS:7;10.10.9.179:52479] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,834 WARN [RS:4;10.10.9.179:52467] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,834 INFO [RS:4;10.10.9.179:52467] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52467%2C1480721340421, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52467,1480721340421, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,834 INFO [RS:0;10.10.9.179:52450] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52450%2C1480721340274, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52450,1480721340274, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,834 WARN [RS:8;10.10.9.179:52482] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,834 WARN [RS:3;10.10.9.179:52464] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,834 WARN [RS:6;10.10.9.179:52476] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,834 WARN [RS:2;10.10.9.179:52460] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 
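
Each "WAL configuration" record above reports rollsize=19 KB against blocksize=20 KB (the tiny 20 KB block size appears to be a test setting). That ratio is consistent with the block size scaled by a 0.95 roll multiplier, which is also the documented default of hbase.regionserver.logroll.multiplier; a quick check:

    public class WalRollSize {
      public static void main(String[] args) {
        long blocksize = 20 * 1024;          // "blocksize=20 KB" from the log
        double multiplier = 0.95;            // assumed: default logroll multiplier
        long rollsize = (long) (blocksize * multiplier);
        System.out.println(rollsize + " bytes = " + (rollsize / 1024) + " KB"); // 19 KB
      }
    }
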
2016-12-02 15:29:02,835 INFO [RS:6;10.10.9.179:52476] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52476%2C1480721340506, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52476,1480721340506, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,835 INFO [RS:3;10.10.9.179:52464] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52464%2C1480721340388, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52464,1480721340388, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,834 INFO [RS:8;10.10.9.179:52482] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52482%2C1480721340569, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52482,1480721340569, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,834 WARN [RS:1;10.10.9.179:52454] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 2016-12-02 15:29:02,835 INFO [RS:1;10.10.9.179:52454] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52454%2C1480721340310, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52454,1480721340310, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,834 INFO [RS:7;10.10.9.179:52479] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52479%2C1480721340539, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52479,1480721340539, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,835 INFO [RS:2;10.10.9.179:52460] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52460%2C1480721340350, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52460,1480721340350, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,867 DEBUG [RS:9;10.10.9.179:52485] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52485%2C1480721340604 for peer 1 2016-12-02 15:29:02,872 INFO [RS:9;10.10.9.179:52485] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52485,1480721340604/10.10.9.179%2C52485%2C1480721340604.1480721342818 2016-12-02 15:29:02,872 WARN [RS:5;10.10.9.179:52473] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 
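
The "Start tracking logs for wal group ... for peer 1" records show each region server picking up an already-registered replication peer with id 1. Registering such a peer through the era-appropriate ReplicationAdmin API would look roughly like this sketch; the cluster key is an assumption (the log's second mini cluster is rooted at baseZNode=/2 on the same quorum):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
    import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

    public class AddPeerSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (ReplicationAdmin admin = new ReplicationAdmin(conf)) {
          ReplicationPeerConfig peer = new ReplicationPeerConfig();
          peer.setClusterKey("localhost:60648:/2"); // assumed key for the /2 cluster
          admin.addPeer("1", peer);                 // peer id "1", as in the log
        }
      }
    }
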
2016-12-02 15:29:02,872 INFO [RS:5;10.10.9.179:52473] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52473%2C1480721340476, suffix=, logDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52473,1480721340476, archiveDir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:02,888 DEBUG [RS:9;10.10.9.179:52485] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52485%2C1480721340604 2016-12-02 15:29:02,891 DEBUG [RS:9;10.10.9.179:52485] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52428,DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1,DISK], DatanodeInfoWithStorage[127.0.0.1:52412,DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3,DISK], DatanodeInfoWithStorage[127.0.0.1:52436,DS-9ec9db34-4e19-4191-b144-62275f2077e0,DISK]] 2016-12-02 15:29:02,898 DEBUG [RS:3;10.10.9.179:52464] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52464%2C1480721340388 for peer 1 2016-12-02 15:29:02,898 INFO [RS:3;10.10.9.179:52464] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52464,1480721340388/10.10.9.179%2C52464%2C1480721340388.1480721342835 2016-12-02 15:29:02,898 DEBUG [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721342898}]}} 2016-12-02 15:29:02,898 DEBUG [RS:3;10.10.9.179:52464] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52464%2C1480721340388 2016-12-02 15:29:02,899 DEBUG [RS:3;10.10.9.179:52464] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52412,DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3,DISK], DatanodeInfoWithStorage[127.0.0.1:52420,DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083,DISK], DatanodeInfoWithStorage[127.0.0.1:52440,DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897,DISK]] 2016-12-02 15:29:02,914 DEBUG [RS:7;10.10.9.179:52479] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52479%2C1480721340539 for peer 1 2016-12-02 15:29:02,914 INFO [RS:7;10.10.9.179:52479] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52479,1480721340539/10.10.9.179%2C52479%2C1480721340539.1480721342836 2016-12-02 15:29:02,916 DEBUG [RS:7;10.10.9.179:52479] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52479%2C1480721340539 2016-12-02 15:29:02,916 DEBUG [RS:0;10.10.9.179:52450] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52450%2C1480721340274 for peer 1 2016-12-02 15:29:02,916 DEBUG [RS:7;10.10.9.179:52479] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52420,DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083,DISK], DatanodeInfoWithStorage[127.0.0.1:52432,DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7,DISK], DatanodeInfoWithStorage[127.0.0.1:52407,DS-370520ed-6fc4-4604-b142-a5d4284a311c,DISK]] 2016-12-02 15:29:02,917 INFO [RS:0;10.10.9.179:52450] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52450,1480721340274/10.10.9.179%2C52450%2C1480721340274.1480721342834 2016-12-02 15:29:02,923 INFO [ProcedureExecutorWorker-1] 
hbase.MetaTableAccessor(1768): Updated table hbase:namespace state to ENABLING in META 2016-12-02 15:29:02,923 DEBUG [RS:2;10.10.9.179:52460] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52460%2C1480721340350 for peer 1 2016-12-02 15:29:02,925 INFO [RS:2;10.10.9.179:52460] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52460,1480721340350/10.10.9.179%2C52460%2C1480721340350.1480721342836 2016-12-02 15:29:02,925 INFO [ProcedureExecutorWorker-1] master.AssignmentManager(751): Assigning 1 region(s) to 10.10.9.179,52448,1480721340079 2016-12-02 15:29:02,925 DEBUG [RS:8;10.10.9.179:52482] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52482%2C1480721340569 for peer 1 2016-12-02 15:29:02,923 DEBUG [RS:1;10.10.9.179:52454] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52454%2C1480721340310 for peer 1 2016-12-02 15:29:02,927 INFO [RS:1;10.10.9.179:52454] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52454,1480721340310/10.10.9.179%2C52454%2C1480721340310.1480721342835 2016-12-02 15:29:02,927 DEBUG [RS:1;10.10.9.179:52454] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52454%2C1480721340310 2016-12-02 15:29:02,920 DEBUG [RS:6;10.10.9.179:52476] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52476%2C1480721340506 for peer 1 2016-12-02 15:29:02,927 INFO [RS:6;10.10.9.179:52476] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52476,1480721340506/10.10.9.179%2C52476%2C1480721340506.1480721342835 2016-12-02 15:29:02,927 DEBUG [RS:1;10.10.9.179:52454] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52424,DS-7977f021-6728-4c15-9596-ae0129596140,DISK], DatanodeInfoWithStorage[127.0.0.1:52403,DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a,DISK], DatanodeInfoWithStorage[127.0.0.1:52436,DS-9ec9db34-4e19-4191-b144-62275f2077e0,DISK]] 2016-12-02 15:29:02,919 DEBUG [RS:4;10.10.9.179:52467] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52467%2C1480721340421 for peer 1 2016-12-02 15:29:02,927 INFO [RS:4;10.10.9.179:52467] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52467,1480721340421/10.10.9.179%2C52467%2C1480721340421.1480721342834 2016-12-02 15:29:02,928 DEBUG [RS:4;10.10.9.179:52467] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52467%2C1480721340421 2016-12-02 15:29:02,927 DEBUG [RS:6;10.10.9.179:52476] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52476%2C1480721340506 2016-12-02 15:29:02,926 INFO [RS:8;10.10.9.179:52482] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52482,1480721340569/10.10.9.179%2C52482%2C1480721340569.1480721342835 2016-12-02 15:29:02,925 DEBUG [RS:2;10.10.9.179:52460] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52460%2C1480721340350 2016-12-02 15:29:02,924 DEBUG [RS:0;10.10.9.179:52450] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52450%2C1480721340274 2016-12-02 15:29:02,929 DEBUG [RS:2;10.10.9.179:52460] wal.AbstractFSWAL(737): Create 
new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52440,DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897,DISK], DatanodeInfoWithStorage[127.0.0.1:52428,DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1,DISK], DatanodeInfoWithStorage[127.0.0.1:52436,DS-9ec9db34-4e19-4191-b144-62275f2077e0,DISK]] 2016-12-02 15:29:02,929 DEBUG [RS:8;10.10.9.179:52482] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52482%2C1480721340569 2016-12-02 15:29:02,929 DEBUG [RS:6;10.10.9.179:52476] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52428,DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1,DISK], DatanodeInfoWithStorage[127.0.0.1:52412,DS-09afa37d-7680-43c2-9a55-48fdc90bdca3,DISK], DatanodeInfoWithStorage[127.0.0.1:52403,DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a,DISK]] 2016-12-02 15:29:02,929 DEBUG [RS:4;10.10.9.179:52467] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52407,DS-370520ed-6fc4-4604-b142-a5d4284a311c,DISK], DatanodeInfoWithStorage[127.0.0.1:52416,DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270,DISK], DatanodeInfoWithStorage[127.0.0.1:52428,DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1,DISK]] 2016-12-02 15:29:02,930 DEBUG [RS:8;10.10.9.179:52482] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52432,DS-7b1dc621-03e2-4208-a089-45882edf6203,DISK], DatanodeInfoWithStorage[127.0.0.1:52403,DS-228e15b4-22f0-4d12-a083-5b9d180b1d06,DISK], DatanodeInfoWithStorage[127.0.0.1:52420,DS-7ccb6605-886f-405c-ad06-219ad508d964,DISK]] 2016-12-02 15:29:02,929 INFO [ProcedureExecutorWorker-1] master.RegionStates(1139): Transition {5450cdacaee02275eb0f7d3bc71c5f02 state=OFFLINE, ts=1480721342925, server=null} to {5450cdacaee02275eb0f7d3bc71c5f02 state=PENDING_OPEN, ts=1480721342929, server=10.10.9.179,52448,1480721340079} 2016-12-02 15:29:02,930 DEBUG [RS:0;10.10.9.179:52450] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52407,DS-378ff72f-b0d6-4d05-815b-ae7795fe2171,DISK], DatanodeInfoWithStorage[127.0.0.1:52428,DS-acaec845-0744-4b60-8e8f-289bfadf69f9,DISK], DatanodeInfoWithStorage[127.0.0.1:52436,DS-cc7f8b08-497e-4564-9350-b8bad4875d61,DISK]] 2016-12-02 15:29:02,930 INFO [ProcedureExecutorWorker-1] master.RegionStateStore(208): Updating hbase:meta row hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 
with state=PENDING_OPEN, sn=10.10.9.179,52448,1480721340079 2016-12-02 15:29:02,946 DEBUG [RS:5;10.10.9.179:52473] regionserver.ReplicationSourceManager(422): Start tracking logs for wal group 10.10.9.179%2C52473%2C1480721340476 for peer 1 2016-12-02 15:29:02,946 INFO [RS:5;10.10.9.179:52473] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52473,1480721340476/10.10.9.179%2C52473%2C1480721340476.1480721342872 2016-12-02 15:29:02,954 DEBUG [RS:5;10.10.9.179:52473] regionserver.ReplicationSource(225): Starting up worker for wal group 10.10.9.179%2C52473%2C1480721340476 2016-12-02 15:29:02,956 DEBUG [RS:5;10.10.9.179:52473] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52412,DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3,DISK], DatanodeInfoWithStorage[127.0.0.1:52428,DS-acaec845-0744-4b60-8e8f-289bfadf69f9,DISK], DatanodeInfoWithStorage[127.0.0.1:52420,DS-7ccb6605-886f-405c-ad06-219ad508d964,DISK]] 2016-12-02 15:29:02,973 INFO [ProcedureExecutorWorker-1] regionserver.RSRpcServices(1772): Open hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 2016-12-02 15:29:02,977 DEBUG [ProcedureExecutorWorker-1] master.AssignmentManager(922): Bulk assigning done for 10.10.9.179,52448,1480721340079 2016-12-02 15:29:02,978 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52448-0] regionserver.HRegion(6583): Opening region: {ENCODED => 5450cdacaee02275eb0f7d3bc71c5f02, NAME => 'hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02.', STARTKEY => '', ENDKEY => ''} 2016-12-02 15:29:02,978 DEBUG [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721342978}]}} 2016-12-02 15:29:02,980 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52448-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table namespace 5450cdacaee02275eb0f7d3bc71c5f02 2016-12-02 15:29:02,981 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52448-0] regionserver.HRegion(743): Instantiated hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 
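
The WAL file names above, e.g. 10.10.9.179%2C52450%2C1480721340274.1480721342834, are the server name (host,port,startcode) URL-encoded, with the commas becoming %2C, followed by the WAL's creation timestamp. A quick check:

    import java.net.URLEncoder;

    public class WalNameCheck {
      public static void main(String[] args) throws Exception {
        String serverName = "10.10.9.179,52450,1480721340274"; // host,port,startcode
        long creationTs = 1480721342834L;                      // from the log
        String walName = URLEncoder.encode(serverName, "UTF-8") + "." + creationTs;
        // Prints: 10.10.9.179%2C52450%2C1480721340274.1480721342834
        System.out.println(walName);
      }
    }
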
2016-12-02 15:29:02,985 INFO [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1768): Updated table hbase:namespace state to ENABLED in META 2016-12-02 15:29:02,987 INFO [StoreOpener-5450cdacaee02275eb0f7d3bc71c5f02-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:02,987 INFO [StoreOpener-5450cdacaee02275eb0f7d3bc71c5f02-1] hfile.CacheConfig(256): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:02,987 INFO [StoreOpener-5450cdacaee02275eb0f7d3bc71c5f02-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:02,990 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52448-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/namespace/5450cdacaee02275eb0f7d3bc71c5f02 2016-12-02 15:29:02,999 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52448-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/namespace/5450cdacaee02275eb0f7d3bc71c5f02/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-12-02 15:29:02,999 INFO [RS_OPEN_PRIORITY_REGION-10.10.9.179:52448-0] regionserver.HRegion(893): Onlined 5450cdacaee02275eb0f7d3bc71c5f02; next sequenceid=2 2016-12-02 15:29:03,004 INFO [PostOpenDeployTasks:5450cdacaee02275eb0f7d3bc71c5f02] regionserver.HRegionServer(1995): Post open deploy tasks for hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 2016-12-02 15:29:03,005 DEBUG [PostOpenDeployTasks:5450cdacaee02275eb0f7d3bc71c5f02] master.AssignmentManager(2949): Got transition OPENED for {5450cdacaee02275eb0f7d3bc71c5f02 state=PENDING_OPEN, ts=1480721342929, server=10.10.9.179,52448,1480721340079} from 10.10.9.179,52448,1480721340079 2016-12-02 15:29:03,005 INFO [PostOpenDeployTasks:5450cdacaee02275eb0f7d3bc71c5f02] master.RegionStates(1139): Transition {5450cdacaee02275eb0f7d3bc71c5f02 state=PENDING_OPEN, ts=1480721342929, server=10.10.9.179,52448,1480721340079} to {5450cdacaee02275eb0f7d3bc71c5f02 state=OPEN, ts=1480721343005, server=10.10.9.179,52448,1480721340079} 2016-12-02 15:29:03,005 INFO [PostOpenDeployTasks:5450cdacaee02275eb0f7d3bc71c5f02] master.RegionStateStore(208): Updating hbase:meta row hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 
with state=OPEN, openSeqNum=2, server=10.10.9.179,52448,1480721340079 2016-12-02 15:29:03,009 DEBUG [PostOpenDeployTasks:5450cdacaee02275eb0f7d3bc71c5f02] master.RegionStates(466): Onlined 5450cdacaee02275eb0f7d3bc71c5f02 on 10.10.9.179,52448,1480721340079 2016-12-02 15:29:03,010 DEBUG [PostOpenDeployTasks:5450cdacaee02275eb0f7d3bc71c5f02] regionserver.HRegionServer(2022): Finished post open deploy task for hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 2016-12-02 15:29:03,010 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52448-0] handler.OpenRegionHandler(126): Opened hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. on 10.10.9.179,52448,1480721340079 2016-12-02 15:29:03,061 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/namespace 2016-12-02 15:29:03,062 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/namespace 2016-12-02 15:29:03,206 DEBUG [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor(706): Procedure CreateNamespaceProcedure (namespace=default) id=2 owner=tyu.hfs.9 state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 2016-12-02 15:29:03,323 DEBUG [ProcedureExecutorWorker-1] lock.ZKInterProcessLockBase(328): Released /1/table-lock/hbase:namespace/write-master:524480000000000 2016-12-02 15:29:03,323 DEBUG [ProcedureExecutorWorker-1] procedure2.ProcedureExecutor(987): Procedure completed in 935msec: CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu.hfs.9 state=FINISHED 2016-12-02 15:29:03,671 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-12-02 15:29:03,674 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-12-02 15:29:03,881 DEBUG [ProcedureExecutorWorker-1] procedure2.ProcedureExecutor(987): Procedure completed in 694msec: CreateNamespaceProcedure (namespace=default) id=2 owner=tyu.hfs.9 state=FINISHED 2016-12-02 15:29:03,996 DEBUG [10.10.9.179:52448.activeMasterManager] procedure2.ProcedureExecutor(706): Procedure CreateNamespaceProcedure (namespace=hbase) id=3 owner=tyu.hfs.9 state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 
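
The two CreateNamespaceProcedure runs above are the master bootstrapping the reserved "default" and "hbase" namespaces on first startup. A client creates its own namespace through the same machinery; a hedged sketch with a hypothetical namespace name:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // "demo" is illustrative; the master runs a CreateNamespaceProcedure for it,
          // as it does for "default" and "hbase" in the log above.
          admin.createNamespace(NamespaceDescriptor.create("demo").build());
        }
      }
    }
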
2016-12-02 15:29:04,320 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/namespace 2016-12-02 15:29:04,321 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-12-02 15:29:04,321 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-12-02 15:29:04,532 DEBUG [ProcedureExecutorWorker-3] procedure2.ProcedureExecutor(987): Procedure completed in 536msec: CreateNamespaceProcedure (namespace=hbase) id=3 owner=tyu.hfs.9 state=FINISHED 2016-12-02 15:29:04,547 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /1/namespace/default already exists 2016-12-02 15:29:04,548 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/default 2016-12-02 15:29:04,548 DEBUG [10.10.9.179:52448.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /1/namespace/hbase already exists 2016-12-02 15:29:04,549 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/namespace/hbase 2016-12-02 15:29:04,549 INFO [10.10.9.179:52448.activeMasterManager] master.HMaster(820): Master has completed initialization 2016-12-02 15:29:04,551 INFO [10.10.9.179:52448.activeMasterManager] quotas.MasterQuotaManager(71): Quota support disabled 2016-12-02 15:29:04,552 INFO [10.10.9.179:52448.activeMasterManager] zookeeper.ZooKeeperWatcher(195): not a secure deployment, proceeding 2016-12-02 15:29:04,562 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3dd818e8 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:04,564 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x3dd818e80x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:04,564 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x3dd818e8-0x158c1de825b003b connected 2016-12-02 15:29:04,565 DEBUG [main] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41b1f51e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:04,568 DEBUG [hconnection-0x3dd818e8-shared-pool43-t1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:04,568 DEBUG [hconnection-0x3dd818e8-shared-pool43-t1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:04,572 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:52744; connections=11, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:04,572 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:04,573 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 
10.10.9.179 port: 52744 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0
2016-12-02 15:29:04,592 INFO [main] hbase.HBaseTestingUtility(1113): Minicluster is up
2016-12-02 15:29:04,592 INFO [main] hbase.HBaseTestingUtility(1033): Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
2016-12-02 15:29:04,592 INFO [main] hbase.HBaseTestingUtility(448): System.getProperty("hadoop.log.dir") already set to: /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop_logs so I do NOT create it in target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304
2016-12-02 15:29:04,592 WARN [main] hbase.HBaseTestingUtility(452): hadoop.log.dir property value differs in configuration and system: Configuration=/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop-log-dir while System=/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop_logs Erasing configuration value by system value.
2016-12-02 15:29:04,592 INFO [main] hbase.HBaseTestingUtility(448): System.getProperty("hadoop.tmp.dir") already set to: /Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop_tmp so I do NOT create it in target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304
2016-12-02 15:29:04,592 WARN [main] hbase.HBaseTestingUtility(452): hadoop.tmp.dir property value differs in configuration and system: Configuration=/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop-tmp-dir while System=/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/hadoop_tmp Erasing configuration value by system value.
2016-12-02 15:29:04,592 INFO [main] hbase.HBaseTestingUtility(516): Created new mini-cluster data directory: /Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/dfscluster_2ad9746f-3cc9-4584-80af-0ebe43f401db, deleteOnExit=true
2016-12-02 15:29:04,593 INFO [main] hbase.HBaseTestingUtility(763): Setting test.cache.data to /Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/cache_data in system properties and HBase conf
2016-12-02 15:29:04,593 INFO [main] hbase.HBaseTestingUtility(763): Setting hadoop.tmp.dir to /Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/hadoop_tmp in system properties and HBase conf
2016-12-02 15:29:04,593 INFO [main] hbase.HBaseTestingUtility(763): Setting hadoop.log.dir to /Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/hadoop_logs in system properties and HBase conf
2016-12-02 15:29:04,593 INFO [main] hbase.HBaseTestingUtility(763): Setting mapreduce.cluster.local.dir to /Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/mapred_local in system properties and HBase conf
2016-12-02 15:29:04,593 INFO [main] hbase.HBaseTestingUtility(763): Setting mapreduce.cluster.temp.dir to /Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/mapred_temp in system properties and HBase conf
2016-12-02 15:29:04,593 INFO [main] hbase.HBaseTestingUtility(754): read short circuit is OFF
2016-12-02 15:29:04,594 DEBUG [main] fs.HFileSystem(244): The file system is not a DistributedFileSystem. Skipping on block location reordering
Formatting using clusterid: testClusterID
2016-12-02 15:29:04,650 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:29:04,654 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/hdfs to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52755_hdfs____pdz6kk/webapp
2016-12-02 15:29:04,735 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52755
2016-12-02 15:29:05,117 INFO [10.10.9.179,52482,1480721340569_ChoreService_1] hbase.ScheduledChore(179): Chore: CompactionChecker missed its start time
2016-12-02 15:29:05,117 INFO [10.10.9.179,52482,1480721340569_ChoreService_2] hbase.ScheduledChore(179): Chore: 10.10.9.179,52482,1480721340569-MemstoreFlusherChore missed its start time
2016-12-02 15:29:05,124 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2016-12-02 15:29:05,128 INFO [main] log.Slf4jLog(67): Extract jar:file:/Users/tyu/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.1/hadoop-hdfs-2.7.1-tests.jar!/webapps/datanode to /var/folders/4g/2vdss5497xx9blpn2pbqc38r0000gn/T/Jetty_localhost_52800_datanode____4zyew2/webapp
2016-12-02 15:29:05,193 INFO [main] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52800
2016-12-02 15:29:05,424 INFO [IPC Server handler 5 on 52767] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-a21ac871-28b7-41f1-89ea-18b8ea95e060 node DatanodeRegistration(127.0.0.1:52790, datanodeUuid=ab4dfd0f-70a9-4b5c-afb6-11b4a7964d13, infoPort=52803, infoSecurePort=0, ipcPort=52804, storageInfo=lv=-56;cid=testClusterID;nsid=2124654329;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
2016-12-02 15:29:05,424 INFO [IPC Server handler 5 on 52767] blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc node DatanodeRegistration(127.0.0.1:52790, datanodeUuid=ab4dfd0f-70a9-4b5c-afb6-11b4a7964d13, infoPort=52803, infoSecurePort=0, ipcPort=52804, storageInfo=lv=-56;cid=testClusterID;nsid=2124654329;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2016-12-02 15:29:05,461 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:05,462 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:05,499 INFO [IPC Server handler 3 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 7
2016-12-02 15:29:05,906 INFO [main] util.FSUtils(760): Created version file at hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547 with version=8
2016-12-02 15:29:05,906 INFO [main] hbase.HBaseTestingUtility(1283): The hbase.fs.tmp.dir is set to /user/tyu/hbase-staging
2016-12-02 15:29:05,908 INFO [main] client.ConnectionUtils(128): master//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:05,908 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:05,908 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:05,908 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:05,909 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:05,910 INFO [main] ipc.RpcServer$Listener(801): master//10.10.9.179:0: started 3 reader(s) listening on port=52887
2016-12-02 15:29:05,911 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:05,912 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:05,912 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:05,913 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=master:52887 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:05,916 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:528870x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:05,916 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): master:52887-0x158c1de825b003c connected
2016-12-02 15:29:05,917 DEBUG [main] zookeeper.RecoverableZooKeeper(584): Node /2 already exists
2016-12-02 15:29:05,919 DEBUG [main] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master
2016-12-02 15:29:05,919 DEBUG [main] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-12-02 15:29:05,920 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:05,921 INFO [RpcServer.listener,port=52887] ipc.RpcServer$Listener(882): RpcServer.listener,port=52887: starting
2016-12-02 15:29:05,921 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52887
2016-12-02 15:29:05,921 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52887
2016-12-02 15:29:05,921 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52887
2016-12-02 15:29:05,921 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52887
2016-12-02 15:29:05,921 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887
2016-12-02 15:29:05,921 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52887
2016-12-02 15:29:05,921 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52887
2016-12-02 15:29:05,922 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52887
2016-12-02 15:29:05,922 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52887
2016-12-02 15:29:05,922 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52887
2016-12-02 15:29:05,922 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52887
2016-12-02 15:29:05,922 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52887
2016-12-02 15:29:05,922 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52887
2016-12-02 15:29:05,922 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887
2016-12-02 15:29:05,923 INFO [main] master.HMaster(416): hbase.rootdir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547, hbase.cluster.distributed=false
2016-12-02 15:29:05,925 INFO [main] master.HMaster(1840): Adding backup master ZNode /2/backup-masters/10.10.9.179,52887,1480721345911
2016-12-02 15:29:05,926 DEBUG [main] zookeeper.ZKUtil(363): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/backup-masters/10.10.9.179,52887,1480721345911
2016-12-02 15:29:05,926 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/master
2016-12-02 15:29:05,928 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(363): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/master
2016-12-02 15:29:05,930 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/master
2016-12-02 15:29:05,930 INFO [10.10.9.179:52887.activeMasterManager] master.ActiveMasterManager(171): Deleting ZNode for /2/backup-masters/10.10.9.179,52887,1480721345911 from backup master directory
2016-12-02 15:29:05,930 DEBUG [main-EventThread] master.ActiveMasterManager(127): A master is now available
2016-12-02 15:29:05,930 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/backup-masters/10.10.9.179,52887,1480721345911
2016-12-02 15:29:05,930 WARN [10.10.9.179:52887.activeMasterManager] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
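
[Editor's note: the records above show a second single-node mini-cluster being started (baseZNode=/2) with its master's RPC pools sized 5/6/3. As a rough sketch, a test produces output like this with something along the following lines; HBaseTestingUtility and startMiniCluster are the standard test API, while the handler-count overrides and class/variable names are illustrative and assume the usual mapping of the default, priority, and replication RPC pools to these keys.]

    // Minimal sketch (not the test's actual source) of starting the mini-cluster
    // whose startup is logged above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.regionserver.handler.count", 5);             // handlerCount=5 above
        conf.setInt("hbase.regionserver.metahandler.count", 6);         // handlerCount=6 above
        conf.setInt("hbase.regionserver.replication.handler.count", 3); // handlerCount=3 above
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        util.startMiniCluster(1);        // 1 master, 1 region server, 1 datanode
        try {
          // ... exercise the cluster ...
        } finally {
          util.shutdownMiniCluster();    // tears down HBase, DFS and ZooKeeper
        }
      }
    }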
2016-12-02 15:29:05,931 INFO [10.10.9.179:52887.activeMasterManager] master.ActiveMasterManager(180): Registered Active Master=10.10.9.179,52887,1480721345911
2016-12-02 15:29:05,949 INFO [IPC Server handler 7 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741826_1002{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 42
2016-12-02 15:29:05,950 INFO [main] client.ConnectionUtils(128): regionserver//10.10.9.179:0 server-side Connection retries=350
2016-12-02 15:29:05,950 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=5
2016-12-02 15:29:05,950 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=6
2016-12-02 15:29:05,950 INFO [main] ipc.RpcExecutor(145): RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=50; handlerCount=3
2016-12-02 15:29:05,950 INFO [main] io.ByteBufferPool(83): Created ByteBufferPool with bufferSize : 65536 and maxPoolSize : 320
2016-12-02 15:29:05,951 INFO [main] ipc.RpcServer$Listener(801): regionserver//10.10.9.179:0: started 3 reader(s) listening on port=52893
2016-12-02 15:29:05,953 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:05,953 INFO [main] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:05,954 INFO [main] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:05,954 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:52893 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:05,956 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:528930x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:05,956 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): regionserver:52893-0x158c1de825b003d connected
2016-12-02 15:29:05,956 DEBUG [main] zookeeper.ZKUtil(363): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/master
2016-12-02 15:29:05,957 DEBUG [main] zookeeper.ZKUtil(365): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-12-02 15:29:05,960 DEBUG [RpcServer.responder] ipc.RpcServer$Responder(1044): RpcServer.responder: starting
2016-12-02 15:29:05,960 INFO [RpcServer.listener,port=52893] ipc.RpcServer$Listener(882): RpcServer.listener,port=52893: starting
2016-12-02 15:29:05,960 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52893
2016-12-02 15:29:05,961 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52893
2016-12-02 15:29:05,961 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52893
2016-12-02 15:29:05,961 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52893
2016-12-02 15:29:05,961 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52893
2016-12-02 15:29:05,961 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=0,queue=0,port=52893
2016-12-02 15:29:05,962 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=1,queue=0,port=52893
2016-12-02 15:29:05,962 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=2,queue=0,port=52893
2016-12-02 15:29:05,962 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=3,queue=0,port=52893
2016-12-02 15:29:05,962 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=4,queue=0,port=52893
2016-12-02 15:29:05,962 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52893
2016-12-02 15:29:05,963 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=52893
2016-12-02 15:29:05,963 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=52893
2016-12-02 15:29:05,963 DEBUG [main] ipc.RpcExecutor(215): Started RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893
2016-12-02 15:29:05,969 INFO [M:0;10.10.9.179:52887] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x12c76763 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:05,969 INFO [RS:0;10.10.9.179:52893] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x781c87eb connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:05,971 DEBUG [M:0;10.10.9.179:52887-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x12c767630x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:05,971 DEBUG [M:0;10.10.9.179:52887-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x12c76763-0x158c1de825b003e connected
2016-12-02 15:29:05,971 DEBUG [RS:0;10.10.9.179:52893-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x781c87eb0x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:05,971 DEBUG [RS:0;10.10.9.179:52893-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x781c87eb-0x158c1de825b003f connected
2016-12-02 15:29:05,971 INFO [M:0;10.10.9.179:52887] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:05,972 INFO [RS:0;10.10.9.179:52893] client.ZooKeeperRegistry(105): ClusterId read in ZooKeeper is null
2016-12-02 15:29:05,972 DEBUG [RS:0;10.10.9.179:52893] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:05,972 DEBUG [M:0;10.10.9.179:52887] client.ConnectionImplementation(462): clusterid came back null, using default default-cluster
2016-12-02 15:29:05,972 DEBUG [RS:0;10.10.9.179:52893] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72e4ec9b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:05,972 DEBUG [M:0;10.10.9.179:52887] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@183f0c13, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:06,354 DEBUG [10.10.9.179:52887.activeMasterManager] util.FSUtils(912): Created cluster ID file at hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/hbase.id with ID: 1e58dcac-ef5a-488d-9270-29a9b8c5923c
2016-12-02 15:29:06,358 INFO [10.10.9.179:52887.activeMasterManager] master.MasterFileSystem(348): BOOTSTRAP: creating hbase:meta region
2016-12-02 15:29:06,358 INFO [10.10.9.179:52887.activeMasterManager] regionserver.HRegion(6406): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'false', BLOCKSIZE => '8192'}, {NAME => 'rep_barrier', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'}, {NAME => 'rep_meta', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'}, {NAME => 'rep_position', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'}, {NAME => 'table', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547 Table name == hbase:meta
2016-12-02 15:29:06,369 INFO [IPC Server handler 1 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741827_1003{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|RBW]]} size 32
2016-12-02 15:29:06,773 DEBUG [10.10.9.179:52887.activeMasterManager] regionserver.HRegion(743): Instantiated hbase:meta,,1.1588230740
2016-12-02 15:29:06,779 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:06,779 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=false, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:06,779 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:06,781 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:06,781 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:06,781 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:06,783 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:06,784 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_meta: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:06,784 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:06,787 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:06,787 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_position: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:06,787 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:06,793 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:06,793 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:06,795 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:06,797 DEBUG [10.10.9.179:52887.activeMasterManager] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740
2016-12-02 15:29:06,799 DEBUG [10.10.9.179:52887.activeMasterManager] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (26843545) instead
2016-12-02 15:29:06,802 DEBUG [10.10.9.179:52887.activeMasterManager] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0
2016-12-02 15:29:06,802 INFO [10.10.9.179:52887.activeMasterManager] regionserver.HRegion(893): Onlined 1588230740; next sequenceid=2
2016-12-02 15:29:06,802 DEBUG [10.10.9.179:52887.activeMasterManager] regionserver.HRegion(1486): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-12-02 15:29:06,802 DEBUG [10.10.9.179:52887.activeMasterManager] regionserver.HRegion(1525): Updates disabled for region hbase:meta,,1.1588230740
2016-12-02 15:29:06,802 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed info
2016-12-02 15:29:06,802 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_barrier
2016-12-02 15:29:06,802 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_meta
2016-12-02 15:29:06,802 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_position
2016-12-02 15:29:06,802 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed table
2016-12-02 15:29:06,803 INFO [10.10.9.179:52887.activeMasterManager] regionserver.HRegion(1643): Closed hbase:meta,,1.1588230740
2016-12-02 15:29:06,813 INFO [IPC Server handler 1 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741828_1004{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 1653
2016-12-02 15:29:07,219 DEBUG [10.10.9.179:52887.activeMasterManager] util.FSTableDescriptors(707): Wrote descriptor into: hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2016-12-02 15:29:07,242 INFO [10.10.9.179:52887.activeMasterManager] fs.HFileSystem(275): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-02 15:29:07,242 INFO [10.10.9.179:52887.activeMasterManager] coordination.ZKSplitLogManagerCoordination(586): Found 0 orphan tasks and 0 rescan nodes
2016-12-02 15:29:07,243 DEBUG [10.10.9.179:52887.activeMasterManager] util.FSTableDescriptors(283): Fetching table descriptors from the filesystem.
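
[Editor's note: the FlushLargeStoresPolicy record above falls back to 26843545 bytes because the bound is not set in the table description; that figure is consistent with the default region memstore flush size (134217728 bytes) divided by this meta layout's five column families, 134217728 / 5 = 26843545. A minimal sketch of pinning it explicitly instead; the key is taken verbatim from the log line, while the 16 MB value is illustrative.]

    // Sketch only: pin the per-column-family flush lower bound globally
    // rather than relying on the derived default logged above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushBoundSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            16L * 1024 * 1024);  // flush a single family once it holds 16 MB
      }
    }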
2016-12-02 15:29:07,250 INFO [10.10.9.179:52887.activeMasterManager] balancer.StochasticLoadBalancer(160): loading config
2016-12-02 15:29:07,250 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/balancer
2016-12-02 15:29:07,250 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/normalizer
2016-12-02 15:29:07,251 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/switch/split
2016-12-02 15:29:07,251 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/switch/merge
2016-12-02 15:29:07,252 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running
2016-12-02 15:29:07,252 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running
2016-12-02 15:29:07,253 INFO [10.10.9.179:52887.activeMasterManager] master.HMaster(646): Server active/primary master=10.10.9.179,52887,1480721345911, sessionid=0x158c1de825b003c, setting cluster-up flag (Was=false)
2016-12-02 15:29:07,258 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(832): ClusterId : 1e58dcac-ef5a-488d-9270-29a9b8c5923c
2016-12-02 15:29:07,258 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(832): ClusterId : 1e58dcac-ef5a-488d-9270-29a9b8c5923c
2016-12-02 15:29:07,258 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing
2016-12-02 15:29:07,258 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is initializing
2016-12-02 15:29:07,259 DEBUG [RS:0;10.10.9.179:52893] zookeeper.RecoverableZooKeeper(584): Node /2/flush-table-proc/acquired already exists
2016-12-02 15:29:07,259 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /2/flush-table-proc/acquired already exists
2016-12-02 15:29:07,260 DEBUG [M:0;10.10.9.179:52887] zookeeper.RecoverableZooKeeper(584): Node /2/flush-table-proc/abort already exists
2016-12-02 15:29:07,260 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized
2016-12-02 15:29:07,260 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing
2016-12-02 15:29:07,260 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /2/flush-table-proc/abort already exists
2016-12-02 15:29:07,260 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is initialized
2016-12-02 15:29:07,260 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is initializing
2016-12-02 15:29:07,260 INFO [10.10.9.179:52887.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/flush-table-proc/acquired /2/flush-table-proc/reached /2/flush-table-proc/abort
2016-12-02 15:29:07,261 DEBUG [M:0;10.10.9.179:52887] zookeeper.RecoverableZooKeeper(584): Node /2/online-snapshot already exists
2016-12-02 15:29:07,261 DEBUG [10.10.9.179:52887.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(246): Starting the controller for procedure member:10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,261 DEBUG [M:0;10.10.9.179:52887] zookeeper.RecoverableZooKeeper(584): Node /2/online-snapshot/acquired already exists
2016-12-02 15:29:07,262 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /2/online-snapshot/acquired already exists
2016-12-02 15:29:07,262 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized
2016-12-02 15:29:07,262 DEBUG [M:0;10.10.9.179:52887] zookeeper.RecoverableZooKeeper(584): Node /2/online-snapshot/abort already exists
2016-12-02 15:29:07,262 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is initialized
2016-12-02 15:29:07,263 INFO [RS:0;10.10.9.179:52893] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false
2016-12-02 15:29:07,263 INFO [10.10.9.179:52887.activeMasterManager] procedure.ZKProcedureUtil(270): Clearing all procedure znodes: /2/online-snapshot/acquired /2/online-snapshot/reached /2/online-snapshot/abort
2016-12-02 15:29:07,263 INFO [M:0;10.10.9.179:52887] regionserver.MemStoreFlusher(135): globalMemStoreLimit=995.6 M, globalMemStoreLimitLowMark=945.8 M, Offheap=false
2016-12-02 15:29:07,263 INFO [RS:0;10.10.9.179:52893] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-12-02 15:29:07,263 INFO [M:0;10.10.9.179:52887] throttle.PressureAwareCompactionThroughputController(132): Compaction throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00 MB/sec, off peak: unlimited, tuning period: 60000 ms
2016-12-02 15:29:07,263 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec
2016-12-02 15:29:07,263 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer$CompactionChecker(1625): CompactionChecker runs every 0sec
2016-12-02 15:29:07,263 DEBUG [10.10.9.179:52887.activeMasterManager] procedure.ZKProcedureCoordinatorRpcs(246): Starting the controller for procedure member:10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,263 DEBUG [RS:0;10.10.9.179:52893] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61b4a2e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0
2016-12-02 15:29:07,263 DEBUG [M:0;10.10.9.179:52887] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c2a9ede, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=10.10.9.179/10.10.9.179:0
2016-12-02 15:29:07,263 DEBUG [RS:0;10.10.9.179:52893] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;10.10.9.179:52893
2016-12-02 15:29:07,264 DEBUG [M:0;10.10.9.179:52887] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:M:0;10.10.9.179:52887
2016-12-02 15:29:07,264 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-12-02 15:29:07,264 DEBUG [RS:0;10.10.9.179:52893] zookeeper.ZKUtil(363): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/rs/10.10.9.179,52893,1480721345952
2016-12-02 15:29:07,264 INFO [10.10.9.179:52887.activeMasterManager] master.MasterCoprocessorHost(101): System coprocessor loading is enabled
2016-12-02 15:29:07,264 DEBUG [M:0;10.10.9.179:52887] zookeeper.ZKUtil(363): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/rs/10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,264 DEBUG [10.10.9.179:52887.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-10.10.9.179:52887, corePoolSize=5, maxPoolSize=5
2016-12-02 15:29:07,264 INFO [RS:0;10.10.9.179:52893] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled
2016-12-02 15:29:07,265 INFO [RS:0;10.10.9.179:52893] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled
2016-12-02 15:29:07,265 DEBUG [10.10.9.179:52887.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-10.10.9.179:52887, corePoolSize=5, maxPoolSize=5
2016-12-02 15:29:07,264 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/rs/10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,264 INFO [M:0;10.10.9.179:52887] regionserver.RegionServerCoprocessorHost(68): System coprocessor loading is enabled
2016-12-02 15:29:07,265 INFO [M:0;10.10.9.179:52887] regionserver.RegionServerCoprocessorHost(69): Table coprocessor loading is enabled
2016-12-02 15:29:07,265 DEBUG [10.10.9.179:52887.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-10.10.9.179:52887, corePoolSize=5, maxPoolSize=5
2016-12-02 15:29:07,265 DEBUG [main-EventThread] zookeeper.ZKUtil(363): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/rs/10.10.9.179,52893,1480721345952
2016-12-02 15:29:07,265 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52887,1480721345911 with port=52893, startcode=1480721345952
2016-12-02 15:29:07,265 DEBUG [10.10.9.179:52887.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-10.10.9.179:52887, corePoolSize=5, maxPoolSize=5
2016-12-02 15:29:07,267 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52887,1480721345911 with port=52887, startcode=1480721345911
2016-12-02 15:29:07,267 DEBUG [RS:0;10.10.9.179:52893] ipc.RpcConnection(133): Use SIMPLE authentication for service RegionServerStatusService, sasl=false
2016-12-02 15:29:07,267 DEBUG [10.10.9.179:52887.activeMasterManager] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-10.10.9.179:52887, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:07,267 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /2/rs/10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,267 DEBUG [M:0;10.10.9.179:52887] regionserver.HRegionServer(2484): Master is not running yet
2016-12-02 15:29:07,267 WARN [M:0;10.10.9.179:52887] regionserver.HRegionServer(970): reportForDuty failed; sleeping and then retrying.
2016-12-02 15:29:07,267 DEBUG [10.10.9.179:52887.activeMasterManager] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-10.10.9.179:52887, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:07,267 DEBUG [RS:0;10.10.9.179:52893] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52887
2016-12-02 15:29:07,269 DEBUG [main-EventThread] zookeeper.RegionServerTracker(93): Added tracking of RS /2/rs/10.10.9.179,52893,1480721345952
2016-12-02 15:29:07,269 INFO [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor(487): Starting procedure executor threads=8
2016-12-02 15:29:07,269 INFO [10.10.9.179:52887.activeMasterManager] wal.WALProcedureStore(299): Starting WAL Procedure Store lease recovery
2016-12-02 15:29:07,278 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:53037; connections=1, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0
2016-12-02 15:29:07,279 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.10 (auth:SIMPLE)
2016-12-02 15:29:07,279 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 53037 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0
2016-12-02 15:29:07,281 DEBUG [10.10.9.179:52887.activeMasterManager] wal.WALProcedureStore(922): Roll new state log: 1
2016-12-02 15:29:07,281 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] ipc.CallRunner(127): RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887: callId: 0 service: RegionServerStatusService methodName: RegionServerStartup size: 48 connection: 10.10.9.179:53037 deadline: 1480721357280
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2427)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:262)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:10352)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:07,282 INFO [10.10.9.179:52887.activeMasterManager] wal.WALProcedureStore(328): Lease acquired for flushLogId: 1
2016-12-02 15:29:07,285 INFO [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor(508): recover procedure store (WALProcedureStore) lease: 16msec
2016-12-02 15:29:07,285 DEBUG [10.10.9.179:52887.activeMasterManager] wal.WALProcedureStore(345): No state logs to replay.
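
[Editor's note: the ServerNotRunningYetException above is benign; reportForDuty raced the master's own startup, and both callers simply log "reportForDuty failed; sleeping and then retrying." A schematic of that retry pattern follows; it is an illustration only, not HRegionServer's actual code, and only the exception class (taken from the stack trace above) is a real HBase type.]

    // Schematic retry loop for the reportForDuty race seen above.
    import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;

    final class ReportForDutySketch {
      interface MasterStub {
        void regionServerStartup() throws ServerNotRunningYetException;
      }

      static void reportForDuty(MasterStub master) throws InterruptedException {
        long sleepMs = 100;
        while (true) {
          try {
            master.regionServerStartup();  // RPC to RegionServerStatusService
            return;                        // registration accepted
          } catch (ServerNotRunningYetException e) {
            Thread.sleep(sleepMs);         // "sleeping and then retrying"
            sleepMs = Math.min(sleepMs * 2, 3_000);  // back off, capped
          }
        }
      }
    }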
2016-12-02 15:29:07,285 DEBUG [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor$1(283): load procedures maxProcId=0
2016-12-02 15:29:07,285 INFO [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor(522): load procedure store (WALProcedureStore): 0msec
2016-12-02 15:29:07,285 DEBUG [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor(526): start workers 8
2016-12-02 15:29:07,286 DEBUG [10.10.9.179:52887.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2016-12-02 15:29:07,286 INFO [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:07,288 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): replicationLogCleaner0x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:07,288 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(529): replicationLogCleaner-0x158c1de825b0040 connected
2016-12-02 15:29:07,289 DEBUG [RS:0;10.10.9.179:52893] regionserver.HRegionServer(2484): Master is not running yet
2016-12-02 15:29:07,289 WARN [RS:0;10.10.9.179:52893] regionserver.HRegionServer(970): reportForDuty failed; sleeping and then retrying.
2016-12-02 15:29:07,289 DEBUG [10.10.9.179:52887.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2016-12-02 15:29:07,290 DEBUG [10.10.9.179:52887.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2016-12-02 15:29:07,290 DEBUG [10.10.9.179:52887.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2016-12-02 15:29:07,290 DEBUG [10.10.9.179:52887.activeMasterManager] cleaner.CleanerChore(99): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2016-12-02 15:29:07,290 INFO [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6b943405 connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:07,292 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x6b9434050x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:07,292 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x6b943405-0x158c1de825b0041 connected
2016-12-02 15:29:07,292 DEBUG [10.10.9.179:52887.activeMasterManager] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e502055, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2016-12-02 15:29:07,292 INFO [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(120): Process identifier=ReplicationAdmin connecting to ZooKeeper ensemble=localhost:60648
2016-12-02 15:29:07,294 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): ReplicationAdmin0x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-12-02 15:29:07,294 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(529): ReplicationAdmin-0x158c1de825b0042 connected
2016-12-02 15:29:07,294 INFO [10.10.9.179:52887.activeMasterManager] master.ServerManager(1042): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
2016-12-02 15:29:07,295 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52887,1480721345911 with port=52887, startcode=1480721345911
2016-12-02 15:29:07,296 INFO [M:0;10.10.9.179:52887] master.ServerManager(453): Registering server=10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,296 DEBUG [M:0;10.10.9.179:52887] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547
2016-12-02 15:29:07,296 DEBUG [M:0;10.10.9.179:52887] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52767
2016-12-02 15:29:07,296 DEBUG [M:0;10.10.9.179:52887] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1
2016-12-02 15:29:07,296 WARN [M:0;10.10.9.179:52887] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2016-12-02 15:29:07,296 INFO [M:0;10.10.9.179:52887] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:07,296 DEBUG [M:0;10.10.9.179:52887] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,297 INFO [M:0;10.10.9.179:52887] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:07,297 INFO [M:0;10.10.9.179:52887] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds
2016-12-02 15:29:07,298 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52887, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:07,298 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52887, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:07,298 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52887, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:07,298 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52887, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:07,298 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52887, corePoolSize=1, maxPoolSize=1
2016-12-02 15:29:07,298 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52887, corePoolSize=2, maxPoolSize=2
2016-12-02 15:29:07,299 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52887, corePoolSize=10, maxPoolSize=10
2016-12-02 15:29:07,299 DEBUG [M:0;10.10.9.179:52887] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52887, corePoolSize=3, maxPoolSize=3
2016-12-02 15:29:07,310 INFO [M:0;10.10.9.179:52887] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore.
2016-12-02 15:29:07,310 INFO [SplitLogWorker-10.10.9.179:52887] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52887,1480721345911 starting
2016-12-02 15:29:07,310 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52887,1480721345911, RpcServer on 10.10.9.179/10.10.9.179:52887, sessionid=0x158c1de825b003c
2016-12-02 15:29:07,310 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting
2016-12-02 15:29:07,310 DEBUG [M:0;10.10.9.179:52887] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,310 DEBUG [M:0;10.10.9.179:52887] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52887,1480721345911'
2016-12-02 15:29:07,310 DEBUG [M:0;10.10.9.179:52887] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/2/flush-table-proc/abort'
2016-12-02 15:29:07,310 DEBUG [M:0;10.10.9.179:52887] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/2/flush-table-proc/acquired'
2016-12-02 15:29:07,311 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started
2016-12-02 15:29:07,311 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting
2016-12-02 15:29:07,311 DEBUG [M:0;10.10.9.179:52887] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,311 DEBUG [M:0;10.10.9.179:52887] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52887,1480721345911'
2016-12-02 15:29:07,311 DEBUG [M:0;10.10.9.179:52887] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/2/online-snapshot/abort'
2016-12-02 15:29:07,311 DEBUG [M:0;10.10.9.179:52887] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/2/online-snapshot/acquired'
2016-12-02 15:29:07,311 DEBUG [M:0;10.10.9.179:52887] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started
2016-12-02 15:29:07,311 INFO [M:0;10.10.9.179:52887] quotas.RegionServerQuotaManager(62): Quota support disabled
2016-12-02 15:29:07,345 INFO [10.10.9.179:52887.activeMasterManager] master.ServerManager(1059): Finished waiting for region servers count to settle; checked in 1, slept for 51 ms, expecting minimum of 1, maximum of 1, master is running
2016-12-02 15:29:07,348 INFO [10.10.9.179:52887.activeMasterManager] master.ServerManager(453): Registering server=10.10.9.179,52893,1480721345952
2016-12-02 15:29:07,348 INFO [10.10.9.179:52887.activeMasterManager] master.HMaster(900): Registered server found up in zk but who has not yet reported in: 10.10.9.179,52893,1480721345952
2016-12-02 15:29:07,351 DEBUG [10.10.9.179:52887.activeMasterManager] master.MasterWalManager(174): No log files to split, proceeding...
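
[Editor's note: the settle-wait figures logged by ServerManager above (minimum 1, maximum 1, timeout 4500 ms, interval 1500 ms) come from the master's configuration. A minimal sketch, assuming the standard hbase.master.wait.on.regionservers.* keys; the values simply mirror the log.]

    // Sketch: configuring how long the master waits for region servers to check in.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SettleWaitSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);  // expecting minimum of 1
        conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 1);  // maximum of 1
        conf.setInt("hbase.master.wait.on.regionservers.timeout", 4500);  // timeout of 4500 ms
        conf.setInt("hbase.master.wait.on.regionservers.interval", 1500); // interval of 1500 ms
      }
    }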
2016-12-02 15:29:07,351 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(622): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Unable to get data of znode /2/meta-region-server because node does not exist (not an error)
2016-12-02 15:29:07,355 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(622): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Unable to get data of znode /2/meta-region-server because node does not exist (not an error)
2016-12-02 15:29:07,355 INFO [10.10.9.179:52887.activeMasterManager] master.MasterMetaBootstrap(188): Re-assigning hbase:meta with replicaId, 0 it was on null
2016-12-02 15:29:07,355 DEBUG [10.10.9.179:52887.activeMasterManager] master.AssignmentManager(1321): No previous transition plan found (or ignoring an existing plan) for hbase:meta,,1.1588230740; generated random plan=hri=hbase:meta,,1.1588230740, src=, dest=10.10.9.179,52887,1480721345911; 2 (online=2) available servers, forceNewPlan=false
2016-12-02 15:29:07,355 INFO [10.10.9.179:52887.activeMasterManager] master.AssignmentManager(1105): Assigning hbase:meta,,1.1588230740 to 10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,355 INFO [10.10.9.179:52887.activeMasterManager] master.RegionStates(1139): Transition {1588230740 state=OFFLINE, ts=1480721347355, server=null} to {1588230740 state=PENDING_OPEN, ts=1480721347355, server=10.10.9.179,52887,1480721345911}
2016-12-02 15:29:07,356 INFO [10.10.9.179:52887.activeMasterManager] zookeeper.MetaTableLocator(442): Setting hbase:meta region location in ZooKeeper as 10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,358 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.MetaTableLocator(454): META region location doesn't exist, create it
2016-12-02 15:29:07,358 DEBUG [10.10.9.179:52887.activeMasterManager] master.ServerManager(968): New admin connection to 10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,359 INFO [10.10.9.179:52887.activeMasterManager] regionserver.RSRpcServices(1772): Open hbase:meta,,1.1588230740
2016-12-02 15:29:07,360 INFO [RS_OPEN_META-10.10.9.179:52887-0] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2016-12-02 15:29:07,361 DEBUG [10.10.9.179:52887.activeMasterManager] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721347359}]}}
2016-12-02 15:29:07,362 WARN [RS_OPEN_META-10.10.9.179:52887-0] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated.
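
[Editor's note: the tiny WAL sizing printed on the next record (blocksize=20 KB, rollsize=19 KB) is consistent with a test override of the WAL block size; the roll size is the block size times the roll multiplier, 20 KB * 0.95 = 19 KB. A sketch with the standard keys; treat the exact values as illustrative of this test's overrides.]

    // Sketch: producing a WAL that rolls early, as in the configuration logged below.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.regionserver.hlog.blocksize", 20 * 1024L);  // 20 KB WAL blocks
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.95f);  // roll at 95% => 19 KB
      }
    }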
2016-12-02 15:29:07,363 INFO [RS_OPEN_META-10.10.9.179:52887-0] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52887%2C1480721345911.meta, suffix=.meta, logDir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52887,1480721345911, archiveDir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/oldWALs
2016-12-02 15:29:07,371 INFO [RS_OPEN_META-10.10.9.179:52887-0] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52887,1480721345911/10.10.9.179%2C52887%2C1480721345911.meta.1480721347363.meta
2016-12-02 15:29:07,371 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52790,DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc,DISK]]
2016-12-02 15:29:07,372 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] regionserver.HRegion(6583): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2016-12-02 15:29:07,372 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] coprocessor.CoprocessorHost(202): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2016-12-02 15:29:07,372 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] regionserver.HRegion(7728): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2016-12-02 15:29:07,375 INFO [RS_OPEN_META-10.10.9.179:52887-0] regionserver.RegionCoprocessorHost(368): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2016-12-02 15:29:07,375 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table meta 1588230740
2016-12-02 15:29:07,375 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] regionserver.HRegion(743): Instantiated hbase:meta,,1.1588230740
2016-12-02 15:29:07,377 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:07,377 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:07,377 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:07,378 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:07,378 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_barrier: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:07,378 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:07,379 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:07,379 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_meta: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:07,380 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:07,381 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:07,381 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for rep_position: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:07,381 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:07,382 INFO [StoreOpener-1588230740-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:07,382 INFO [StoreOpener-1588230740-1] hfile.CacheConfig(256): Created cacheConfig for table: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:07,382 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:07,385 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740
2016-12-02 15:29:07,387 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] regionserver.FlushLargeStoresPolicy(61): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in description of table hbase:meta, use config (26843545) instead
2016-12-02 15:29:07,389 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/recovered.edits/3.seqid to file, newSeqId=3, maxSeqId=2
2016-12-02 15:29:07,389 INFO [RS_OPEN_META-10.10.9.179:52887-0] regionserver.HRegion(893): Onlined 1588230740; next sequenceid=3
2016-12-02 15:29:07,393 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1995): Post open deploy tasks for hbase:meta,,1.1588230740
2016-12-02 15:29:07,393 DEBUG [PostOpenDeployTasks:1588230740] master.AssignmentManager(2949): Got transition OPENED for {1588230740 state=PENDING_OPEN, ts=1480721347355, server=10.10.9.179,52887,1480721345911} from 10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,393 INFO [PostOpenDeployTasks:1588230740] master.RegionStates(1139): Transition {1588230740 state=PENDING_OPEN, ts=1480721347355, server=10.10.9.179,52887,1480721345911} to {1588230740 state=OPEN, ts=1480721347393, server=10.10.9.179,52887,1480721345911}
2016-12-02 15:29:07,393 INFO [PostOpenDeployTasks:1588230740] zookeeper.MetaTableLocator(442): Setting hbase:meta region location in ZooKeeper as 10.10.9.179,52887,1480721345911
2016-12-02 15:29:07,394 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/meta-region-server
2016-12-02 15:29:07,394 DEBUG
[PostOpenDeployTasks:1588230740] master.RegionStates(466): Onlined 1588230740 on 10.10.9.179,52887,1480721345911 2016-12-02 15:29:07,395 DEBUG [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(2022): Finished post open deploy task for hbase:meta,,1.1588230740 2016-12-02 15:29:07,396 DEBUG [RS_OPEN_META-10.10.9.179:52887-0] handler.OpenRegionHandler(126): Opened hbase:meta,,1.1588230740 on 10.10.9.179,52887,1480721345911 2016-12-02 15:29:07,443 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2016-12-02 15:29:07,584 INFO [10.10.9.179:52887.activeMasterManager] hbase.MetaTableAccessor(1768): Updated table hbase:meta state to ENABLED in META 2016-12-02 15:29:07,585 DEBUG [10.10.9.179:52887.activeMasterManager] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721347585}]}} 2016-12-02 15:29:07,623 INFO [10.10.9.179:52887.activeMasterManager] hbase.MetaTableAccessor(1768): Updated table hbase:meta state to ENABLED in META 2016-12-02 15:29:07,625 INFO [10.10.9.179:52887.activeMasterManager] master.ServerManager(681): AssignmentManager hasn't finished failover cleanup; waiting 2016-12-02 15:29:07,626 INFO [10.10.9.179:52887.activeMasterManager] master.MasterMetaBootstrap(217): hbase:meta with replicaId 0 assigned=1, location=10.10.9.179,52887,1480721345911 2016-12-02 15:29:07,665 INFO [10.10.9.179:52887.activeMasterManager] master.AssignmentManager(580): Clean cluster startup. Don't reassign user regions 2016-12-02 15:29:07,665 INFO [10.10.9.179:52887.activeMasterManager] master.AssignmentManager(450): Joined the cluster in 39ms, failover=false 2016-12-02 15:29:07,666 INFO [10.10.9.179:52887.activeMasterManager] master.TableNamespaceManager(91): Namespace table not found. Creating... 2016-12-02 15:29:07,666 INFO [10.10.9.179:52887.activeMasterManager] master.HMaster(1617): Client=null/null create 'hbase:namespace', {NAME => 'info', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} 2016-12-02 15:29:07,794 DEBUG [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor(706): Procedure CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu.hfs.10 state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 
2016-12-02 15:29:07,800 DEBUG [ProcedureExecutorWorker-1] lock.ZKInterProcessLockBase(226): Acquired a lock for /2/table-lock/hbase:namespace/write-master:528870000000000 2016-12-02 15:29:07,804 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2016-12-02 15:29:07,804 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2016-12-02 15:29:07,918 INFO [IPC Server handler 7 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 0 2016-12-02 15:29:07,920 DEBUG [ProcedureExecutorWorker-1] util.FSTableDescriptors(707): Wrote descriptor into: hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2016-12-02 15:29:07,921 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(6406): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', BLOCKCACHE => 'true', BLOCKSIZE => '8192'} RootDir = hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/.tmp Table name == hbase:namespace 2016-12-02 15:29:07,929 INFO [IPC Server handler 8 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|RBW]]} size 0 2016-12-02 15:29:07,931 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(743): Instantiated hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. 2016-12-02 15:29:07,931 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1486): Closing hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.: disabling compactions & flushes 2016-12-02 15:29:07,931 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1525): Updates disabled for region hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. 2016-12-02 15:29:07,931 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1643): Closed hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. 
2016-12-02 15:29:08,040 DEBUG [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1417): Put{"totalColumns":1,"row":"hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:08,044 INFO [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1614): Added 1 2016-12-02 15:29:08,148 DEBUG [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721348148}]}} 2016-12-02 15:29:08,160 INFO [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1768): Updated table hbase:namespace state to ENABLING in META 2016-12-02 15:29:08,161 INFO [ProcedureExecutorWorker-1] master.AssignmentManager(751): Assigning 1 region(s) to 10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,166 INFO [ProcedureExecutorWorker-1] master.RegionStates(1139): Transition {09b1ecd6eda75f10b347b13abc2f2864 state=OFFLINE, ts=1480721348161, server=null} to {09b1ecd6eda75f10b347b13abc2f2864 state=PENDING_OPEN, ts=1480721348166, server=10.10.9.179,52887,1480721345911} 2016-12-02 15:29:08,166 INFO [ProcedureExecutorWorker-1] master.RegionStateStore(208): Updating hbase:meta row hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. with state=PENDING_OPEN, sn=10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,172 INFO [ProcedureExecutorWorker-1] regionserver.RSRpcServices(1772): Open hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. 2016-12-02 15:29:08,173 DEBUG [ProcedureExecutorWorker-1] master.AssignmentManager(922): Bulk assigning done for 10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,176 DEBUG [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721348173}]}} 2016-12-02 15:29:08,178 WARN [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 
2016-12-02 15:29:08,178 INFO [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52887%2C1480721345911, suffix=, logDir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52887,1480721345911, archiveDir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/oldWALs 2016-12-02 15:29:08,179 INFO [ProcedureExecutorWorker-1] hbase.MetaTableAccessor(1768): Updated table hbase:namespace state to ENABLED in META 2016-12-02 15:29:08,185 INFO [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52887,1480721345911/10.10.9.179%2C52887%2C1480721345911.1480721348179 2016-12-02 15:29:08,185 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52790,DS-a21ac871-28b7-41f1-89ea-18b8ea95e060,DISK]] 2016-12-02 15:29:08,186 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] regionserver.HRegion(6583): Opening region: {ENCODED => 09b1ecd6eda75f10b347b13abc2f2864, NAME => 'hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.', STARTKEY => '', ENDKEY => ''} 2016-12-02 15:29:08,186 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table namespace 09b1ecd6eda75f10b347b13abc2f2864 2016-12-02 15:29:08,188 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] regionserver.HRegion(743): Instantiated hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. 2016-12-02 15:29:08,190 INFO [StoreOpener-09b1ecd6eda75f10b347b13abc2f2864-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:08,190 INFO [StoreOpener-09b1ecd6eda75f10b347b13abc2f2864-1] hfile.CacheConfig(256): Created cacheConfig for info: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:08,190 INFO [StoreOpener-09b1ecd6eda75f10b347b13abc2f2864-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:08,193 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/namespace/09b1ecd6eda75f10b347b13abc2f2864 2016-12-02 15:29:08,196 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] wal.WALSplitter(734): Wrote region 
seqId=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/namespace/09b1ecd6eda75f10b347b13abc2f2864/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-12-02 15:29:08,196 INFO [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] regionserver.HRegion(893): Onlined 09b1ecd6eda75f10b347b13abc2f2864; next sequenceid=2 2016-12-02 15:29:08,199 INFO [PostOpenDeployTasks:09b1ecd6eda75f10b347b13abc2f2864] regionserver.HRegionServer(1995): Post open deploy tasks for hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. 2016-12-02 15:29:08,200 DEBUG [PostOpenDeployTasks:09b1ecd6eda75f10b347b13abc2f2864] master.AssignmentManager(2949): Got transition OPENED for {09b1ecd6eda75f10b347b13abc2f2864 state=PENDING_OPEN, ts=1480721348166, server=10.10.9.179,52887,1480721345911} from 10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,200 INFO [PostOpenDeployTasks:09b1ecd6eda75f10b347b13abc2f2864] master.RegionStates(1139): Transition {09b1ecd6eda75f10b347b13abc2f2864 state=PENDING_OPEN, ts=1480721348166, server=10.10.9.179,52887,1480721345911} to {09b1ecd6eda75f10b347b13abc2f2864 state=OPEN, ts=1480721348200, server=10.10.9.179,52887,1480721345911} 2016-12-02 15:29:08,200 INFO [PostOpenDeployTasks:09b1ecd6eda75f10b347b13abc2f2864] master.RegionStateStore(208): Updating hbase:meta row hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. with state=OPEN, openSeqNum=2, server=10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,207 DEBUG [PostOpenDeployTasks:09b1ecd6eda75f10b347b13abc2f2864] master.RegionStates(466): Onlined 09b1ecd6eda75f10b347b13abc2f2864 on 10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,208 DEBUG [PostOpenDeployTasks:09b1ecd6eda75f10b347b13abc2f2864] regionserver.HRegionServer(2022): Finished post open deploy task for hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. 2016-12-02 15:29:08,208 DEBUG [RS_OPEN_PRIORITY_REGION-10.10.9.179:52887-0] handler.OpenRegionHandler(126): Opened hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. on 10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,294 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(2465): reportForDuty to master=10.10.9.179,52887,1480721345911 with port=52893, startcode=1480721345952 2016-12-02 15:29:08,295 INFO [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.ServerManager(453): Registering server=10.10.9.179,52893,1480721345952 2016-12-02 15:29:08,295 DEBUG [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1426): Config from master: hbase.rootdir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547 2016-12-02 15:29:08,296 DEBUG [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1426): Config from master: fs.defaultFS=hdfs://localhost:52767 2016-12-02 15:29:08,296 DEBUG [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1426): Config from master: hbase.master.info.port=-1 2016-12-02 15:29:08,296 WARN [RS:0;10.10.9.179:52893] hbase.ZNodeClearer(61): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2016-12-02 15:29:08,296 INFO [RS:0;10.10.9.179:52893] hfile.CacheConfig(281): Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:08,296 DEBUG [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1724): logdir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52893,1480721345952 2016-12-02 15:29:08,302 INFO [RS:0;10.10.9.179:52893] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x484a949a connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:08,303 DEBUG [RS:0;10.10.9.179:52893-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x484a949a0x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:08,303 DEBUG [RS:0;10.10.9.179:52893-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x484a949a-0x158c1de825b0043 connected 2016-12-02 15:29:08,304 DEBUG [RS:0;10.10.9.179:52893] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46bb2671, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:08,304 DEBUG [RS:0;10.10.9.179:52893] regionserver.Replication(152): ReplicationStatisticsThread 300 2016-12-02 15:29:08,304 INFO [RS:0;10.10.9.179:52893] wal.WALFactory(141): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2016-12-02 15:29:08,304 INFO [RS:0;10.10.9.179:52893] regionserver.MetricsRegionServerWrapperImpl(140): Computing regionserver metrics every 5000 milliseconds 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-10.10.9.179:52893, corePoolSize=3, maxPoolSize=3 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-10.10.9.179:52893, corePoolSize=1, maxPoolSize=1 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-10.10.9.179:52893, corePoolSize=3, maxPoolSize=3 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-10.10.9.179:52893, corePoolSize=3, maxPoolSize=3 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-10.10.9.179:52893, corePoolSize=1, maxPoolSize=1 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-10.10.9.179:52893, corePoolSize=2, maxPoolSize=2 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-10.10.9.179:52893, corePoolSize=10, maxPoolSize=10 2016-12-02 15:29:08,305 DEBUG [RS:0;10.10.9.179:52893] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-10.10.9.179:52893, corePoolSize=3, maxPoolSize=3 2016-12-02 15:29:08,306 DEBUG 
[ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/rs/10.10.9.179,52887,1480721345911 2016-12-02 15:29:08,306 DEBUG [ReplicationExecutor-0] zookeeper.ZKUtil(363): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Set watcher on existing znode=/2/rs/10.10.9.179,52893,1480721345952 2016-12-02 15:29:08,306 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$AdoptAbandonedQueuesWorker(765): Current list of replicators: [10.10.9.179,52893,1480721345952] other RSs: [10.10.9.179,52887,1480721345911, 10.10.9.179,52893,1480721345952] 2016-12-02 15:29:08,307 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/namespace 2016-12-02 15:29:08,308 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/namespace 2016-12-02 15:29:08,327 INFO [RS:0;10.10.9.179:52893] regionserver.HeapMemoryManager(198): Starting HeapMemoryTuner chore. 2016-12-02 15:29:08,327 INFO [SplitLogWorker-10.10.9.179:52893] regionserver.SplitLogWorker(134): SplitLogWorker 10.10.9.179,52893,1480721345952 starting 2016-12-02 15:29:08,328 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1459): Serving as 10.10.9.179,52893,1480721345952, RpcServer on 10.10.9.179/10.10.9.179:52893, sessionid=0x158c1de825b003d 2016-12-02 15:29:08,328 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(52): Procedure flush-table-proc is starting 2016-12-02 15:29:08,328 DEBUG [RS:0;10.10.9.179:52893] flush.RegionServerFlushTableProcedureManager(103): Start region server flush procedure manager 10.10.9.179,52893,1480721345952 2016-12-02 15:29:08,328 DEBUG [RS:0;10.10.9.179:52893] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52893,1480721345952' 2016-12-02 15:29:08,328 DEBUG [RS:0;10.10.9.179:52893] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/2/flush-table-proc/abort' 2016-12-02 15:29:08,328 DEBUG [RS:0;10.10.9.179:52893] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/2/flush-table-proc/acquired' 2016-12-02 15:29:08,328 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(54): Procedure flush-table-proc is started 2016-12-02 15:29:08,329 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(52): Procedure online-snapshot is starting 2016-12-02 15:29:08,329 DEBUG [RS:0;10.10.9.179:52893] snapshot.RegionServerSnapshotManager(124): Start Snapshot Manager 10.10.9.179,52893,1480721345952 2016-12-02 15:29:08,329 DEBUG [RS:0;10.10.9.179:52893] procedure.ZKProcedureMemberRpcs(350): Starting procedure member '10.10.9.179,52893,1480721345952' 2016-12-02 15:29:08,329 DEBUG [RS:0;10.10.9.179:52893] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/2/online-snapshot/abort' 2016-12-02 15:29:08,329 DEBUG [RS:0;10.10.9.179:52893] procedure.ZKProcedureMemberRpcs(150): Looking for new procedures under znode:'/2/online-snapshot/acquired' 2016-12-02 15:29:08,329 DEBUG [RS:0;10.10.9.179:52893] procedure.RegionServerProcedureManagerHost(54): Procedure online-snapshot is started 2016-12-02 15:29:08,329 INFO [RS:0;10.10.9.179:52893] quotas.RegionServerQuotaManager(62): 
Quota support disabled 2016-12-02 15:29:08,332 INFO [RS:7;10.10.9.179:52479.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,332 DEBUG [RS:7;10.10.9.179:52479.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52479%2C1480721340539 2016-12-02 15:29:08,338 INFO [RS:8;10.10.9.179:52482.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,338 INFO [RS:1;10.10.9.179:52454.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,338 INFO [RS:2;10.10.9.179:52460.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,338 INFO [RS:3;10.10.9.179:52464.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,338 INFO [RS:0;10.10.9.179:52450.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,338 INFO [RS:9;10.10.9.179:52485.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,339 DEBUG [RS:0;10.10.9.179:52450.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52450%2C1480721340274 2016-12-02 15:29:08,338 DEBUG [RS:3;10.10.9.179:52464.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52464%2C1480721340388 2016-12-02 15:29:08,338 DEBUG [RS:2;10.10.9.179:52460.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52460%2C1480721340350 2016-12-02 15:29:08,338 INFO [RS:6;10.10.9.179:52476.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,345 DEBUG [RS:6;10.10.9.179:52476.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52476%2C1480721340506 2016-12-02 15:29:08,338 INFO [RS:5;10.10.9.179:52473.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,338 INFO [RS:4;10.10.9.179:52467.replicationSource,1] regionserver.ReplicationSource(321): Replicating bec31e96-5e53-44a0-979b-2eef7e7b4feb -> 1e58dcac-ef5a-488d-9270-29a9b8c5923c 2016-12-02 15:29:08,338 DEBUG [RS:1;10.10.9.179:52454.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52454%2C1480721340310 2016-12-02 15:29:08,338 DEBUG [RS:8;10.10.9.179:52482.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52482%2C1480721340569 2016-12-02 15:29:08,346 DEBUG [RS:4;10.10.9.179:52467.replicationSource,1] 
regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52467%2C1480721340421 2016-12-02 15:29:08,346 DEBUG [RS:5;10.10.9.179:52473.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52473%2C1480721340476 2016-12-02 15:29:08,339 DEBUG [RS:9;10.10.9.179:52485.replicationSource,1] regionserver.ReplicationSource(330): Someone has beat us to start a worker thread for wal group 10.10.9.179%2C52485%2C1480721340604 2016-12-02 15:29:08,387 DEBUG [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor(706): Procedure CreateNamespaceProcedure (namespace=default) id=2 owner=tyu.hfs.10 state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 2016-12-02 15:29:08,493 DEBUG [ProcedureExecutorWorker-1] lock.ZKInterProcessLockBase(328): Released /2/table-lock/hbase:namespace/write-master:528870000000000 2016-12-02 15:29:08,493 DEBUG [ProcedureExecutorWorker-1] procedure2.ProcedureExecutor(987): Procedure completed in 721msec: CreateTableProcedure (table=hbase:namespace) id=1 owner=tyu.hfs.10 state=FINISHED 2016-12-02 15:29:08,822 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/namespace 2016-12-02 15:29:08,823 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-12-02 15:29:09,037 DEBUG [ProcedureExecutorWorker-1] procedure2.ProcedureExecutor(987): Procedure completed in 617msec: CreateNamespaceProcedure (namespace=default) id=2 owner=tyu.hfs.10 state=FINISHED 2016-12-02 15:29:09,144 DEBUG [10.10.9.179:52887.activeMasterManager] procedure2.ProcedureExecutor(706): Procedure CreateNamespaceProcedure (namespace=hbase) id=3 owner=tyu.hfs.10 state=RUNNABLE:CREATE_NAMESPACE_PREPARE added to the store. 2016-12-02 15:29:09,336 WARN [RS:0;10.10.9.179:52893] wal.AbstractFSWAL(392): 'hbase.regionserver.maxlogs' was deprecated. 
2016-12-02 15:29:09,336 INFO [RS:0;10.10.9.179:52893] wal.AbstractFSWAL(397): WAL configuration: blocksize=20 KB, rollsize=19 KB, prefix=10.10.9.179%2C52893%2C1480721345952, suffix=, logDir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52893,1480721345952, archiveDir=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/oldWALs 2016-12-02 15:29:09,350 INFO [RS:0;10.10.9.179:52893] wal.AbstractFSWAL(671): New WAL /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52893,1480721345952/10.10.9.179%2C52893%2C1480721345952.1480721349337 2016-12-02 15:29:09,350 DEBUG [RS:0;10.10.9.179:52893] wal.AbstractFSWAL(737): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:52790,DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc,DISK]] 2016-12-02 15:29:09,475 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/namespace 2016-12-02 15:29:09,476 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node default with data: \x0A\x07default 2016-12-02 15:29:09,476 DEBUG [main-EventThread] hbase.ZKNamespaceManager(201): Updating namespace cache from node hbase with data: \x0A\x05hbase 2016-12-02 15:29:09,685 DEBUG [ProcedureExecutorWorker-3] procedure2.ProcedureExecutor(987): Procedure completed in 541msec: CreateNamespaceProcedure (namespace=hbase) id=3 owner=tyu.hfs.10 state=FINISHED 2016-12-02 15:29:09,694 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /2/namespace/default already exists 2016-12-02 15:29:09,695 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/namespace/default 2016-12-02 15:29:09,696 DEBUG [10.10.9.179:52887.activeMasterManager] zookeeper.RecoverableZooKeeper(584): Node /2/namespace/hbase already exists 2016-12-02 15:29:09,696 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/namespace/hbase 2016-12-02 15:29:09,696 INFO [10.10.9.179:52887.activeMasterManager] master.HMaster(820): Master has completed initialization 2016-12-02 15:29:09,697 INFO [10.10.9.179:52887.activeMasterManager] quotas.MasterQuotaManager(71): Quota support disabled 2016-12-02 15:29:09,697 INFO [10.10.9.179:52887.activeMasterManager] zookeeper.ZooKeeperWatcher(195): not a secure deployment, proceeding 2016-12-02 15:29:09,759 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x63884e4 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:09,764 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x63884e40x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:09,764 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x63884e4-0x158c1de825b0044 connected 2016-12-02 15:29:09,765 DEBUG [main] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4acb7ecc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, 
bind address=null 2016-12-02 15:29:09,766 DEBUG [hconnection-0x63884e4-shared-pool55-t1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:09,767 DEBUG [hconnection-0x63884e4-shared-pool55-t1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52887 2016-12-02 15:29:09,772 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:53287; connections=2, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:09,772 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:09,773 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 53287 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:09,779 INFO [main] hbase.HBaseTestingUtility(1113): Minicluster is up 2016-12-02 15:29:09,789 DEBUG [main] ipc.RpcConnection(133): Use SIMPLE authentication for service MasterService, sasl=false 2016-12-02 15:29:09,789 DEBUG [main] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:09,794 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:53288; connections=12, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:09,795 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:09,795 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 53288 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:09,810 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/balancer 2016-12-02 15:29:09,811 INFO [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.MasterRpcServices(174): Client=tyu//10.10.9.179 set balanceSwitch=false 2016-12-02 15:29:10,046 INFO [main] hbase.ResourceChecker(148): before: replication.TestSerialReplication#testRegionMerge Thread=1264, OpenFileDescriptor=2704, MaxFileDescriptor=10240, SystemLoadAverage=455, ProcessCount=284, AvailableMemoryMB=2285 2016-12-02 15:29:10,047 WARN [main] hbase.ResourceChecker(135): Thread=1264 is superior to 500 2016-12-02 15:29:10,047 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=2704 is superior to 1024 2016-12-02 15:29:10,060 INFO [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.HMaster(1584): Client=tyu//10.10.9.179 create 'testRegionMerge', {NAME => 'f', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE 
=> '65536', REPLICATION_SCOPE => '2'} 2016-12-02 15:29:10,167 DEBUG [RpcServer idle connection scanner for port 52448] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52448: task running 2016-12-02 15:29:10,170 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] procedure2.ProcedureExecutor(706): Procedure CreateTableProcedure (table=testRegionMerge) id=4 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 2016-12-02 15:29:10,180 DEBUG [ProcedureExecutorWorker-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/testRegionMerge/write-master:524480000000000 2016-12-02 15:29:10,190 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:10,289 DEBUG [RpcServer idle connection scanner for port 52450] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52450: task running 2016-12-02 15:29:10,325 DEBUG [RpcServer idle connection scanner for port 52454] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52454: task running 2016-12-02 15:29:10,330 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:10,340 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is added to blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|RBW]]} size 0 2016-12-02 15:29:10,341 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|FINALIZED]]} size 0 2016-12-02 15:29:10,342 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741844_1020 size 323 2016-12-02 15:29:10,343 DEBUG [ProcedureExecutorWorker-4] util.FSTableDescriptors(707): Wrote descriptor into: hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/.tmp/data/default/testRegionMerge/.tabledesc/.tableinfo.0000000001 2016-12-02 15:29:10,344 INFO [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(6406): creating HRegion testRegionMerge HTD == 'testRegionMerge', {NAME => 'f', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '2'} RootDir = hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/.tmp Table name == testRegionMerge 2016-12-02 15:29:10,356 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW]]} size 0 2016-12-02 15:29:10,357 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW]]} size 0 2016-12-02 15:29:10,358 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW]]} size 0 2016-12-02 15:29:10,359 DEBUG [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(743): Instantiated testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:10,359 DEBUG [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(1486): Closing testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.: disabling compactions & flushes 2016-12-02 15:29:10,359 DEBUG [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:10,359 INFO [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(1643): Closed testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 
2016-12-02 15:29:10,369 DEBUG [RpcServer idle connection scanner for port 52460] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52460: task running 2016-12-02 15:29:10,401 DEBUG [RpcServer idle connection scanner for port 52464] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52464: task running 2016-12-02 15:29:10,433 DEBUG [RpcServer idle connection scanner for port 52467] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52467: task running 2016-12-02 15:29:10,470 DEBUG [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1417): Put{"totalColumns":1,"row":"testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:10,473 INFO [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1614): Added 1 2016-12-02 15:29:10,489 DEBUG [RpcServer idle connection scanner for port 52473] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52473: task running 2016-12-02 15:29:10,522 DEBUG [RpcServer idle connection scanner for port 52476] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52476: task running 2016-12-02 15:29:10,535 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:10,550 DEBUG [RpcServer idle connection scanner for port 52479] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52479: task running 2016-12-02 15:29:10,583 DEBUG [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"testRegionMerge","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721350583}]}} 2016-12-02 15:29:10,583 DEBUG [RpcServer idle connection scanner for port 52482] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52482: task running 2016-12-02 15:29:10,586 INFO [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1768): Updated table testRegionMerge state to ENABLING in META 2016-12-02 15:29:10,595 DEBUG [ProcedureExecutorWorker-4] balancer.RegionLocationFinder(288): HDFSBlocksDistribution not found in cache for region testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:10,600 INFO [ProcedureExecutorWorker-4] master.AssignmentManager(1622): Bulk assigning 1 region(s) across 11 server(s), round-robin=true 2016-12-02 15:29:10,603 INFO [10.10.9.179,52448,1480721340079-GeneralBulkAssigner-6] master.AssignmentManager(751): Assigning 1 region(s) to 10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,603 DEBUG [ProcedureExecutorWorker-4] master.GeneralBulkAssigner(152): Timeout-on-RIT=391000 2016-12-02 15:29:10,604 INFO [10.10.9.179,52448,1480721340079-GeneralBulkAssigner-6] master.RegionStates(1139): Transition {3e7435b86f12523b1a988d8de8c0f489 state=OFFLINE, ts=1480721350588, server=null} to {3e7435b86f12523b1a988d8de8c0f489 state=PENDING_OPEN, ts=1480721350604, server=10.10.9.179,52473,1480721340476} 2016-12-02 15:29:10,604 INFO [10.10.9.179,52448,1480721340079-GeneralBulkAssigner-6] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 
with state=PENDING_OPEN, sn=10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,605 DEBUG [10.10.9.179,52448,1480721340079-GeneralBulkAssigner-6] master.ServerManager(968): New admin connection to 10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,608 DEBUG [10.10.9.179,52448,1480721340079-GeneralBulkAssigner-6] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false 2016-12-02 15:29:10,608 DEBUG [10.10.9.179,52448,1480721340079-GeneralBulkAssigner-6] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52473 2016-12-02 15:29:10,612 DEBUG [RpcServer.listener,port=52473] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:53373; connections=1, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:10,613 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:10,613 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 53373 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:10,615 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] regionserver.RSRpcServices(1772): Open testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:10,615 DEBUG [RpcServer idle connection scanner for port 52485] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52485: task running 2016-12-02 15:29:10,618 DEBUG [RS_OPEN_REGION-10.10.9.179:52473-0] regionserver.HRegion(6583): Opening region: {ENCODED => 3e7435b86f12523b1a988d8de8c0f489, NAME => 'testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.', STARTKEY => '', ENDKEY => ''} 2016-12-02 15:29:10,621 INFO [RS_OPEN_REGION-10.10.9.179:52473-0] coprocessor.CoprocessorHost(162): System coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter was loaded successfully with priority (536870911). 2016-12-02 15:29:10,621 DEBUG [RS_OPEN_REGION-10.10.9.179:52473-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table testRegionMerge 3e7435b86f12523b1a988d8de8c0f489 2016-12-02 15:29:10,623 DEBUG [RS_OPEN_REGION-10.10.9.179:52473-0] regionserver.HRegion(743): Instantiated testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 
2016-12-02 15:29:10,624 INFO [StoreOpener-3e7435b86f12523b1a988d8de8c0f489-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:10,625 INFO [StoreOpener-3e7435b86f12523b1a988d8de8c0f489-1] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:10,625 INFO [StoreOpener-3e7435b86f12523b1a988d8de8c0f489-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:10,626 DEBUG [RS_OPEN_REGION-10.10.9.179:52473-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/3e7435b86f12523b1a988d8de8c0f489 2016-12-02 15:29:10,630 DEBUG [RS_OPEN_REGION-10.10.9.179:52473-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/3e7435b86f12523b1a988d8de8c0f489/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-12-02 15:29:10,630 INFO [RS_OPEN_REGION-10.10.9.179:52473-0] regionserver.HRegion(893): Onlined 3e7435b86f12523b1a988d8de8c0f489; next sequenceid=2 2016-12-02 15:29:10,634 INFO [PostOpenDeployTasks:3e7435b86f12523b1a988d8de8c0f489] regionserver.HRegionServer(1995): Post open deploy tasks for testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:10,639 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.AssignmentManager(2949): Got transition OPENED for {3e7435b86f12523b1a988d8de8c0f489 state=PENDING_OPEN, ts=1480721350604, server=10.10.9.179,52473,1480721340476} from 10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,639 INFO [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.RegionStates(1139): Transition {3e7435b86f12523b1a988d8de8c0f489 state=PENDING_OPEN, ts=1480721350604, server=10.10.9.179,52473,1480721340476} to {3e7435b86f12523b1a988d8de8c0f489 state=OPEN, ts=1480721350639, server=10.10.9.179,52473,1480721340476} 2016-12-02 15:29:10,639 INFO [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 
with state=OPEN, openSeqNum=2, server=10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,644 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.RegionStates(466): Onlined 3e7435b86f12523b1a988d8de8c0f489 on 10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,644 DEBUG [10.10.9.179,52448,1480721340079-GeneralBulkAssigner-6] master.AssignmentManager(922): Bulk assigning done for 10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,644 DEBUG [ProcedureExecutorWorker-4] master.GeneralBulkAssigner(128): bulk assigning total 1 regions to 11 servers, took 41ms, successfully 2016-12-02 15:29:10,644 INFO [ProcedureExecutorWorker-4] master.AssignmentManager(1629): Bulk assigning done 2016-12-02 15:29:10,644 DEBUG [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"testRegionMerge","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721350644}]}} 2016-12-02 15:29:10,645 DEBUG [PostOpenDeployTasks:3e7435b86f12523b1a988d8de8c0f489] regionserver.HRegionServer(2022): Finished post open deploy task for testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:10,648 DEBUG [RS_OPEN_REGION-10.10.9.179:52473-0] handler.OpenRegionHandler(126): Opened testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. on 10.10.9.179,52473,1480721340476 2016-12-02 15:29:10,648 INFO [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1768): Updated table testRegionMerge state to ENABLED in META 2016-12-02 15:29:10,839 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:10,971 DEBUG [ProcedureExecutorWorker-4] lock.ZKInterProcessLockBase(328): Released /1/table-lock/testRegionMerge/write-master:524480000000000 2016-12-02 15:29:10,971 DEBUG [ProcedureExecutorWorker-4] procedure2.ProcedureExecutor(987): Procedure completed in 800msec: CreateTableProcedure (table=testRegionMerge) id=4 owner=tyu state=FINISHED 2016-12-02 15:29:11,341 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=3,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:11,341 INFO [main] client.HBaseAdmin$TableFuture(3543): Operation: CREATE, Table Name: default:testRegionMerge completed 2016-12-02 15:29:11,342 DEBUG [main] ipc.RpcConnection(133): Use SIMPLE authentication for service MasterService, sasl=false 2016-12-02 15:29:11,342 DEBUG [main] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52887 2016-12-02 15:29:11,348 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:53452; connections=3, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:11,349 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:11,350 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 53452 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:11,353 INFO [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.HMaster(1584): Client=tyu//10.10.9.179 create 'testRegionMerge', {NAME => 'f', 
BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '2'} 2016-12-02 15:29:11,461 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] procedure2.ProcedureExecutor(706): Procedure CreateTableProcedure (table=testRegionMerge) id=4 owner=tyu state=RUNNABLE:CREATE_TABLE_PRE_OPERATION added to the store. 2016-12-02 15:29:11,470 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:11,470 DEBUG [ProcedureExecutorWorker-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /2/table-lock/testRegionMerge/write-master:528870000000000 2016-12-02 15:29:11,574 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:11,590 INFO [IPC Server handler 9 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741835_1011{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|RBW]]} size 323 2016-12-02 15:29:11,777 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:11,992 DEBUG [ProcedureExecutorWorker-4] util.FSTableDescriptors(707): Wrote descriptor into: hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/.tmp/data/default/testRegionMerge/.tabledesc/.tableinfo.0000000001 2016-12-02 15:29:11,993 INFO [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(6406): creating HRegion testRegionMerge HTD == 'testRegionMerge', {NAME => 'f', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', IN_MEMORY_COMPACTION => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '2'} RootDir = hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/.tmp Table name == testRegionMerge 2016-12-02 15:29:12,008 INFO [IPC Server handler 0 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741836_1012{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 50 2016-12-02 15:29:12,082 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:12,410 DEBUG [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(743): Instantiated testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 2016-12-02 15:29:12,411 DEBUG [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(1486): Closing testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.: disabling compactions & flushes 2016-12-02 15:29:12,411 DEBUG [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 
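
The create-table entries above (on both masters, ports 52448 and 52887) show the schema the test uses: a single family 'f' with VERSIONS => '1' and REPLICATION_SCOPE => '2', which on this 2.0.0-SNAPSHOT branch is the serial-replication scope (hence the rep_barrier/rep_meta puts later in the log). Below is a minimal client-side sketch of how such an entry is produced; it is not the test's actual code, and the quorum/znode settings are assumptions for illustration (the mini ZK port and baseZNode values are taken from the log).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.regionserver.BloomType;

    public class CreateReplicatedTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");   // assumed; the test's mini ZK listens on 60648
        conf.set("zookeeper.znode.parent", "/2");          // the second cluster's baseZNode, per the log

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("testRegionMerge"));
          HColumnDescriptor fam = new HColumnDescriptor("f");
          fam.setMaxVersions(1);                 // VERSIONS => '1'
          fam.setBloomFilterType(BloomType.ROW); // BLOOMFILTER => 'ROW'
          fam.setScope(2);                       // REPLICATION_SCOPE => '2' (serial replication)
          htd.addFamily(fam);
          admin.createTable(htd);                // master runs CreateTableProcedure, as logged
        }
      }
    }

createTable() blocks while the client polls the master for completion, which is what the repeated "Checking to see if procedure is done procId=4" entries around these lines correspond to.
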
2016-12-02 15:29:12,411 INFO [RegionOpenAndInitThread-testRegionMerge-1] regionserver.HRegion(1643): Closed testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 2016-12-02 15:29:12,519 DEBUG [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1417): Put{"totalColumns":1,"row":"testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:12,522 INFO [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1614): Added 1 2016-12-02 15:29:12,585 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:12,627 DEBUG [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"testRegionMerge","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721352627}]}} 2016-12-02 15:29:12,629 INFO [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1768): Updated table testRegionMerge state to ENABLING in META 2016-12-02 15:29:12,631 INFO [ProcedureExecutorWorker-4] master.AssignmentManager(751): Assigning 1 region(s) to 10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,632 INFO [ProcedureExecutorWorker-4] master.RegionStates(1139): Transition {cbf52d9c9c92e7bbb1af49b6d521d080 state=OFFLINE, ts=1480721352631, server=null} to {cbf52d9c9c92e7bbb1af49b6d521d080 state=PENDING_OPEN, ts=1480721352632, server=10.10.9.179,52893,1480721345952} 2016-12-02 15:29:12,632 INFO [ProcedureExecutorWorker-4] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. with state=PENDING_OPEN, sn=10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,636 DEBUG [ProcedureExecutorWorker-4] master.ServerManager(968): New admin connection to 10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,636 DEBUG [ProcedureExecutorWorker-4] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false 2016-12-02 15:29:12,636 DEBUG [ProcedureExecutorWorker-4] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52893 2016-12-02 15:29:12,639 DEBUG [RpcServer.listener,port=52893] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:53581; connections=1, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:12,640 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:12,643 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 53581 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:12,646 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52893] regionserver.RSRpcServices(1772): Open testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 
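
The MetaTableAccessor and RegionStateStore entries above are the master writing the region's row into hbase:meta (info:regioninfo, then state=PENDING_OPEN with a server name). A small client-side way to inspect those same rows is to scan hbase:meta directly, as sketched below; this fragment is assumed to sit inside the try block of the previous sketch (it reuses conn), with these additional imports: org.apache.hadoop.hbase.HConstants, org.apache.hadoop.hbase.client.{Result, ResultScanner, Scan, Table}, org.apache.hadoop.hbase.util.Bytes.

    try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Region rows for a table are keyed by "<table>,<startkey>,<timestamp>.<encoded>."
      Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes("testRegionMerge,"));
      scan.addFamily(HConstants.CATALOG_FAMILY);  // the "info" family written above
      try (ResultScanner rs = meta.getScanner(scan)) {
        for (Result r : rs) {
          byte[] server = r.getValue(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
          byte[] seqnum = r.getValue(HConstants.CATALOG_FAMILY, HConstants.SEQNUM_QUALIFIER);
          System.out.println(Bytes.toString(r.getRow())
              + " server=" + (server == null ? "-" : Bytes.toString(server))
              + " openSeqNum=" + (seqnum == null ? "-" : String.valueOf(Bytes.toLong(seqnum))));
        }
      }
    }

The info:server and info:seqnumDuringOpen cells printed here are exactly the qualifiers the Put entries in this log show being written.
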
2016-12-02 15:29:12,651 DEBUG [RS_OPEN_REGION-10.10.9.179:52893-0] regionserver.HRegion(6583): Opening region: {ENCODED => cbf52d9c9c92e7bbb1af49b6d521d080, NAME => 'testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.', STARTKEY => '', ENDKEY => ''} 2016-12-02 15:29:12,652 INFO [RS_OPEN_REGION-10.10.9.179:52893-0] coprocessor.CoprocessorHost(162): System coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter was loaded successfully with priority (536870911). 2016-12-02 15:29:12,652 DEBUG [RS_OPEN_REGION-10.10.9.179:52893-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table testRegionMerge cbf52d9c9c92e7bbb1af49b6d521d080 2016-12-02 15:29:12,652 DEBUG [RS_OPEN_REGION-10.10.9.179:52893-0] regionserver.HRegion(743): Instantiated testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 2016-12-02 15:29:12,654 INFO [StoreOpener-cbf52d9c9c92e7bbb1af49b6d521d080-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:12,654 INFO [StoreOpener-cbf52d9c9c92e7bbb1af49b6d521d080-1] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:12,654 INFO [StoreOpener-cbf52d9c9c92e7bbb1af49b6d521d080-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:12,655 DEBUG [RS_OPEN_REGION-10.10.9.179:52893-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/default/testRegionMerge/cbf52d9c9c92e7bbb1af49b6d521d080 2016-12-02 15:29:12,663 DEBUG [RS_OPEN_REGION-10.10.9.179:52893-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/default/testRegionMerge/cbf52d9c9c92e7bbb1af49b6d521d080/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-12-02 15:29:12,663 INFO [RS_OPEN_REGION-10.10.9.179:52893-0] regionserver.HRegion(893): Onlined cbf52d9c9c92e7bbb1af49b6d521d080; next sequenceid=2 2016-12-02 15:29:12,667 INFO [PostOpenDeployTasks:cbf52d9c9c92e7bbb1af49b6d521d080] regionserver.HRegionServer(1995): Post open deploy tasks for testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 
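
Each store open above logs its effective CompactionConfiguration. For reference, those numbers map onto the long-standing hbase-site.xml keys sketched below; the values shown are the logged defaults (min size 134217728 is the 128 MB memstore flush size, throttle point 2684354560 is 2.5 GB, major period 604800000 ms is 7 days). The key names are the standard ones and worth verifying against this particular 2.0.0-SNAPSHOT branch; setting them is only needed to change the defaults.

    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);         // "files [3, 10)" lower bound
    conf.setInt("hbase.hstore.compaction.max", 10);        // "files [3, 10)" upper bound
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);  // "ratio 1.200000"
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // "off-peak ratio 5.000000"
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);  // 2.5 GB
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period, 7 days
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // "major jitter 0.500000"
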
2016-12-02 15:29:12,671 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.AssignmentManager(2949): Got transition OPENED for {cbf52d9c9c92e7bbb1af49b6d521d080 state=PENDING_OPEN, ts=1480721352632, server=10.10.9.179,52893,1480721345952} from 10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,671 INFO [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.RegionStates(1139): Transition {cbf52d9c9c92e7bbb1af49b6d521d080 state=PENDING_OPEN, ts=1480721352632, server=10.10.9.179,52893,1480721345952} to {cbf52d9c9c92e7bbb1af49b6d521d080 state=OPEN, ts=1480721352671, server=10.10.9.179,52893,1480721345952} 2016-12-02 15:29:12,671 INFO [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. with state=OPEN, openSeqNum=2, server=10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,676 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.RegionStates(466): Onlined cbf52d9c9c92e7bbb1af49b6d521d080 on 10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,676 DEBUG [ProcedureExecutorWorker-4] master.AssignmentManager(922): Bulk assigning done for 10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,676 DEBUG [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1398): Put{"totalColumns":1,"row":"testRegionMerge","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1480721352676}]}} 2016-12-02 15:29:12,677 DEBUG [PostOpenDeployTasks:cbf52d9c9c92e7bbb1af49b6d521d080] regionserver.HRegionServer(2022): Finished post open deploy task for testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 2016-12-02 15:29:12,679 DEBUG [RS_OPEN_REGION-10.10.9.179:52893-0] handler.OpenRegionHandler(126): Opened testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. 
on 10.10.9.179,52893,1480721345952 2016-12-02 15:29:12,680 INFO [ProcedureExecutorWorker-4] hbase.MetaTableAccessor(1768): Updated table testRegionMerge state to ENABLED in META 2016-12-02 15:29:12,884 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2016-12-02 15:29:12,996 DEBUG [ProcedureExecutorWorker-4] lock.ZKInterProcessLockBase(328): Released /2/table-lock/testRegionMerge/write-master:528870000000000 2016-12-02 15:29:12,996 DEBUG [ProcedureExecutorWorker-4] procedure2.ProcedureExecutor(987): Procedure completed in 1.5390sec: CreateTableProcedure (table=testRegionMerge) id=4 owner=tyu state=FINISHED 2016-12-02 15:29:13,298 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2016-12-02 15:29:13,298 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2016-12-02 15:29:13,592 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=4,queue=0,port=52887] master.MasterRpcServices(965): Checking to see if procedure is done procId=4 2016-12-02 15:29:13,592 INFO [main] client.HBaseAdmin$TableFuture(3543): Operation: CREATE, Table Name: default:testRegionMerge completed 2016-12-02 15:29:13,805 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2016-12-02 15:29:14,306 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table 2016-12-02 15:29:15,923 DEBUG [RpcServer idle connection scanner for port 52887] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52887: task running 2016-12-02 15:29:15,961 DEBUG [RpcServer idle connection scanner for port 52893] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52893: task running 2016-12-02 15:29:18,164 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2016-12-02 15:29:18,596 INFO [main] zookeeper.RecoverableZooKeeper(120): Process identifier=hbase-admin-on-hconnection-0x3dd818e8 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:18,600 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): hbase-admin-on-hconnection-0x3dd818e80x0, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:18,600 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(529): hbase-admin-on-hconnection-0x3dd818e8-0x158c1de825b0045 connected 2016-12-02 15:29:18,603 DEBUG [main] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false 2016-12-02 15:29:18,604 DEBUG [main] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52473 2016-12-02 15:29:18,608 DEBUG [RpcServer.listener,port=52473] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54200; connections=2, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:18,609 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:18,609 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54200 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: 
"7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:18,610 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] regionserver.RSRpcServices(2088): Splitting testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:18,614 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] regionserver.CompactSplitThread(265): Split requested for testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.. compaction_queue=(0:0), split_queue=0, merge_queue=0 2016-12-02 15:29:18,621 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52448] master.HMaster(1479): Client=tyu.hfs.5/10.10.9.179/10.10.9.179 Split region {ENCODED => 3e7435b86f12523b1a988d8de8c0f489, NAME => 'testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.', STARTKEY => '', ENDKEY => ''} 2016-12-02 15:29:18,736 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52448] procedure2.ProcedureExecutor(706): Procedure SplitTableRegionProcedure (table=testRegionMerge parent region={ENCODED => 3e7435b86f12523b1a988d8de8c0f489, NAME => 'testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.', STARTKEY => '', ENDKEY => ''} first daughter region={ENCODED => 77b3f337f846c19e5ea9c885289510ac, NAME => 'testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.', STARTKEY => '', ENDKEY => 'r50'} and second daughter region={ENCODED => 6ef423c0591830e60ab18a766b7caf14, NAME => 'testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.', STARTKEY => 'r50', ENDKEY => ''}) id=5 owner=tyu.hfs.5 state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE added to the store. 2016-12-02 15:29:18,744 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=5 2016-12-02 15:29:18,746 DEBUG [ProcedureExecutorWorker-5] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/testRegionMerge/read-master:524480000000001 2016-12-02 15:29:18,963 DEBUG [ProcedureExecutorWorker-5] master.AssignmentManager(2949): Got transition READY_TO_SPLIT for {3e7435b86f12523b1a988d8de8c0f489 state=OPEN, ts=1480721350639, server=10.10.9.179,52473,1480721340476} from 10.10.9.179,52473,1480721340476 2016-12-02 15:29:18,966 INFO [ProcedureExecutorWorker-5] master.RegionStates(1139): Transition {3e7435b86f12523b1a988d8de8c0f489 state=OPEN, ts=1480721350639, server=10.10.9.179,52473,1480721340476} to {3e7435b86f12523b1a988d8de8c0f489 state=SPLITTING, ts=1480721358966, server=10.10.9.179,52473,1480721340476} 2016-12-02 15:29:18,966 INFO [ProcedureExecutorWorker-5] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. with state=SPLITTING 2016-12-02 15:29:19,088 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] regionserver.RSRpcServices(1405): Close and offline [3e7435b86f12523b1a988d8de8c0f489] regions. 2016-12-02 15:29:19,088 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] regionserver.HRegion(1486): Closing testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.: disabling compactions & flushes 2016-12-02 15:29:19,088 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 
2016-12-02 15:29:19,088 INFO [StoreCloserThread-testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.-1] regionserver.HStore(874): Closed f 2016-12-02 15:29:19,095 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/3e7435b86f12523b1a988d8de8c0f489/recovered.edits/5.seqid to file, newSeqId=5, maxSeqId=2 2016-12-02 15:29:19,096 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter 2016-12-02 15:29:19,099 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] regionserver.HRegion(1643): Closed testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. 2016-12-02 15:29:19,100 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52473] hbase.MetaTableAccessor(1398): Put{"totalColumns":2,"row":"3e7435b86f12523b1a988d8de8c0f489","families":{"rep_meta":[{"qualifier":"_TABLENAME_","vlen":15,"tag":[],"timestamp":9223372036854775807}],"rep_barrier":[{"qualifier":"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02","vlen":8,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:19,101 DEBUG [hconnection-0x7821a1f-shared-pool56-t1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:19,101 DEBUG [hconnection-0x7821a1f-shared-pool56-t1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:19,105 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54253; connections=13, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,106 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.5 (auth:SIMPLE) 2016-12-02 15:29:19,106 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54253 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,131 DEBUG [RS:5;10.10.9.179:52473.replicationSource.10.10.9.179%2C52473%2C1480721340476,1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:19,132 DEBUG [RS:5;10.10.9.179:52473.replicationSource.10.10.9.179%2C52473%2C1480721340476,1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:19,136 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54256; connections=14, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,136 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.5 (auth:SIMPLE) 2016-12-02 15:29:19,136 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54256 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" 
src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,450 DEBUG [ProcedureExecutorWorker-5] master.AssignmentManager(2949): Got transition SPLIT_PONR for {3e7435b86f12523b1a988d8de8c0f489 state=SPLITTING, ts=1480721358966, server=10.10.9.179,52473,1480721340476} from 10.10.9.179,52473,1480721340476 2016-12-02 15:29:19,451 DEBUG [ProcedureExecutorWorker-5] hbase.MetaTableAccessor(1813): Put{"totalColumns":3,"row":"testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":1480721359450},{"qualifier":"splitA","vlen":52,"tag":[],"timestamp":1480721359450},{"qualifier":"splitB","vlen":52,"tag":[],"timestamp":1480721359450}]}}, Put{"totalColumns":4,"row":"testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.","families":{"info":[{"qualifier":"regioninfo","vlen":52,"tag":[],"timestamp":1480721359451},{"qualifier":"server","vlen":17,"tag":[],"timestamp":1480721359451},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":1480721359451},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":1480721359451}]}}, Put{"totalColumns":4,"row":"testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.","families":{"info":[{"qualifier":"regioninfo","vlen":52,"tag":[],"timestamp":1480721359451},{"qualifier":"server","vlen":17,"tag":[],"timestamp":1480721359451},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":1480721359451},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":1480721359451}]}}, Put{"totalColumns":1,"row":"3e7435b86f12523b1a988d8de8c0f489","families":{"rep_meta":[{"qualifier":"_DAUGHTER_","vlen":65,"tag":[],"timestamp":9223372036854775807}]}}, Put{"totalColumns":1,"row":"77b3f337f846c19e5ea9c885289510ac","families":{"rep_meta":[{"qualifier":"_PARENT_","vlen":32,"tag":[],"timestamp":9223372036854775807}]}}, Put{"totalColumns":1,"row":"6ef423c0591830e60ab18a766b7caf14","families":{"rep_meta":[{"qualifier":"_PARENT_","vlen":32,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:19,599 DEBUG [hconnection-0x3dd818e8-shared-pool43-t25] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:19,600 DEBUG [hconnection-0x3dd818e8-shared-pool43-t25] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52473 2016-12-02 15:29:19,604 DEBUG [RpcServer.listener,port=52473] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54301; connections=3, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,604 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:19,604 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54301 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,749 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=5 2016-12-02 15:29:19,758 INFO [ProcedureExecutorWorker-5] master.RegionStates(1139): Transition {3e7435b86f12523b1a988d8de8c0f489 state=SPLITTING, ts=1480721358966, 
server=10.10.9.179,52473,1480721340476} to {3e7435b86f12523b1a988d8de8c0f489 state=SPLIT, ts=1480721359758, server=10.10.9.179,52473,1480721340476} 2016-12-02 15:29:19,758 INFO [ProcedureExecutorWorker-5] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489. with state=SPLIT 2016-12-02 15:29:19,760 INFO [ProcedureExecutorWorker-5] master.RegionStates(604): Offlined 3e7435b86f12523b1a988d8de8c0f489 from 10.10.9.179,52473,1480721340476 2016-12-02 15:29:19,760 INFO [ProcedureExecutorWorker-5] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=SPLITTING_NEW, ts=1480721358969, server=10.10.9.179,52473,1480721340476} to {77b3f337f846c19e5ea9c885289510ac state=OFFLINE, ts=1480721359760, server=null} 2016-12-02 15:29:19,760 INFO [ProcedureExecutorWorker-5] master.RegionStates(1139): Transition {6ef423c0591830e60ab18a766b7caf14 state=SPLITTING_NEW, ts=1480721358969, server=10.10.9.179,52473,1480721340476} to {6ef423c0591830e60ab18a766b7caf14 state=OFFLINE, ts=1480721359760, server=null} 2016-12-02 15:29:19,763 DEBUG [AM.-pool3-t2] balancer.RegionLocationFinder(288): HDFSBlocksDistribution not found in cache for region testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. 2016-12-02 15:29:19,764 DEBUG [AM.-pool3-t2] master.AssignmentManager(1321): No previous transition plan found (or ignoring an existing plan) for testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.; generated random plan=hri=testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14., src=, dest=10.10.9.179,52460,1480721340350; 11 (online=11) available servers, forceNewPlan=false 2016-12-02 15:29:19,764 INFO [AM.-pool3-t2] master.AssignmentManager(1105): Assigning testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. to 10.10.9.179,52460,1480721340350 2016-12-02 15:29:19,764 INFO [AM.-pool3-t2] master.RegionStates(1139): Transition {6ef423c0591830e60ab18a766b7caf14 state=OFFLINE, ts=1480721359760, server=null} to {6ef423c0591830e60ab18a766b7caf14 state=PENDING_OPEN, ts=1480721359764, server=10.10.9.179,52460,1480721340350} 2016-12-02 15:29:19,764 DEBUG [AM.-pool3-t1] balancer.RegionLocationFinder(288): HDFSBlocksDistribution not found in cache for region testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 2016-12-02 15:29:19,764 INFO [AM.-pool3-t2] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. with state=PENDING_OPEN, sn=10.10.9.179,52460,1480721340350 2016-12-02 15:29:19,765 DEBUG [AM.-pool3-t1] master.AssignmentManager(1321): No previous transition plan found (or ignoring an existing plan) for testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.; generated random plan=hri=testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac., src=, dest=10.10.9.179,52464,1480721340388; 11 (online=11) available servers, forceNewPlan=false 2016-12-02 15:29:19,765 INFO [AM.-pool3-t1] master.AssignmentManager(1105): Assigning testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 
to 10.10.9.179,52464,1480721340388 2016-12-02 15:29:19,765 INFO [AM.-pool3-t1] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=OFFLINE, ts=1480721359760, server=null} to {77b3f337f846c19e5ea9c885289510ac state=PENDING_OPEN, ts=1480721359765, server=10.10.9.179,52464,1480721340388} 2016-12-02 15:29:19,765 INFO [AM.-pool3-t1] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. with state=PENDING_OPEN, sn=10.10.9.179,52464,1480721340388 2016-12-02 15:29:19,765 DEBUG [AM.-pool3-t2] master.ServerManager(968): New admin connection to 10.10.9.179,52460,1480721340350 2016-12-02 15:29:19,766 DEBUG [AM.-pool3-t2] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false 2016-12-02 15:29:19,766 DEBUG [AM.-pool3-t1] master.ServerManager(968): New admin connection to 10.10.9.179,52464,1480721340388 2016-12-02 15:29:19,766 DEBUG [AM.-pool3-t2] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52460 2016-12-02 15:29:19,766 DEBUG [AM.-pool3-t1] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false 2016-12-02 15:29:19,766 DEBUG [AM.-pool3-t1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52464 2016-12-02 15:29:19,770 DEBUG [RpcServer.listener,port=52464] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54321; connections=1, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,770 DEBUG [RpcServer.listener,port=52460] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54322; connections=1, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,770 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52464] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:19,770 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52460] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:19,770 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52464] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54321 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,771 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52460] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54322 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,774 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52464] regionserver.RSRpcServices(1772): Open testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 2016-12-02 15:29:19,774 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.RSRpcServices(1772): Open testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. 
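
The assignment entries above place the two daughters on different regionservers (ports 52464 and 52460) via randomly generated plans. A small client-side check of where the daughters landed, again reusing the earlier connection (additional imports: org.apache.hadoop.hbase.HRegionLocation, org.apache.hadoop.hbase.client.RegionLocator), might look like:

    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRegionMerge"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegionInfo().getEncodedName()
            + " [" + Bytes.toStringBinary(loc.getRegionInfo().getStartKey()) + "]"
            + " -> " + loc.getServerName());
      }
    }

With the split complete this should report 77b3f337f846c19e5ea9c885289510ac (start key empty) and 6ef423c0591830e60ab18a766b7caf14 (start key r50) on their respective servers, matching the RegionStateStore updates above.
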
2016-12-02 15:29:19,779 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-0] regionserver.HRegion(6583): Opening region: {ENCODED => 77b3f337f846c19e5ea9c885289510ac, NAME => 'testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.', STARTKEY => '', ENDKEY => 'r50'} 2016-12-02 15:29:19,779 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-0] regionserver.HRegion(6583): Opening region: {ENCODED => 6ef423c0591830e60ab18a766b7caf14, NAME => 'testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.', STARTKEY => 'r50', ENDKEY => ''} 2016-12-02 15:29:19,779 INFO [RS_OPEN_REGION-10.10.9.179:52464-0] coprocessor.CoprocessorHost(162): System coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter was loaded successfully with priority (536870911). 2016-12-02 15:29:19,779 INFO [RS_OPEN_REGION-10.10.9.179:52460-0] coprocessor.CoprocessorHost(162): System coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter was loaded successfully with priority (536870911). 2016-12-02 15:29:19,780 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table testRegionMerge 77b3f337f846c19e5ea9c885289510ac 2016-12-02 15:29:19,780 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-0] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table testRegionMerge 6ef423c0591830e60ab18a766b7caf14 2016-12-02 15:29:19,781 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-0] regionserver.HRegion(743): Instantiated testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 2016-12-02 15:29:19,782 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-0] regionserver.HRegion(743): Instantiated testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. 
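
A few entries further down, the master logs "Merge regions 77b3f337f846c19e5ea9c885289510ac and 6ef423c0591830e60ab18a766b7caf14", merging the freshly opened daughters back together. The test's own invocation is not visible in this excerpt; a hedged sketch of the matching client call in the Admin API of this era, taking the encoded names of the two daughter regions, would be:

    // forcible=false: the daughters are adjacent, so a normal merge is allowed.
    admin.mergeRegions(
        Bytes.toBytes("77b3f337f846c19e5ea9c885289510ac"),
        Bytes.toBytes("6ef423c0591830e60ab18a766b7caf14"),
        false);

The "Replication services are not initialized yet" ServiceExceptions that appear around the merge come from a peer regionserver whose replication sink has not finished starting; the replication source logs a WARN and retries, so early in a test run these entries are typically transient noise rather than a failure.
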
2016-12-02 15:29:19,783 WARN [RS_OPEN_REGION-10.10.9.179:52460-0] regionserver.HRegionFileSystem(823): .regioninfo file not found for region: 6ef423c0591830e60ab18a766b7caf14 on table testRegionMerge 2016-12-02 15:29:19,783 WARN [RS_OPEN_REGION-10.10.9.179:52464-0] regionserver.HRegionFileSystem(823): .regioninfo file not found for region: 77b3f337f846c19e5ea9c885289510ac on table testRegionMerge 2016-12-02 15:29:19,794 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW]]} size 0 2016-12-02 15:29:19,795 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW]]} size 0 2016-12-02 15:29:19,795 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW]]} size 0 2016-12-02 15:29:19,796 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 0 2016-12-02 15:29:19,797 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|FINALIZED]]} size 0 2016-12-02 15:29:19,797 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], 
ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 0 2016-12-02 15:29:19,800 INFO [StoreOpener-77b3f337f846c19e5ea9c885289510ac-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:19,800 INFO [StoreOpener-6ef423c0591830e60ab18a766b7caf14-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore 2016-12-02 15:29:19,800 INFO [StoreOpener-77b3f337f846c19e5ea9c885289510ac-1] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:19,800 INFO [StoreOpener-6ef423c0591830e60ab18a766b7caf14-1] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:19,800 INFO [StoreOpener-77b3f337f846c19e5ea9c885289510ac-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:19,800 INFO [StoreOpener-6ef423c0591830e60ab18a766b7caf14-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory 2016-12-02 15:29:19,801 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac 2016-12-02 15:29:19,801 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-0] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14 2016-12-02 15:29:19,804 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-0] wal.WALSplitter(734): Wrote region 
seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-12-02 15:29:19,804 INFO [RS_OPEN_REGION-10.10.9.179:52460-0] regionserver.HRegion(893): Onlined 6ef423c0591830e60ab18a766b7caf14; next sequenceid=2 2016-12-02 15:29:19,804 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/recovered.edits/2.seqid to file, newSeqId=2, maxSeqId=0 2016-12-02 15:29:19,810 INFO [RS_OPEN_REGION-10.10.9.179:52464-0] regionserver.HRegion(893): Onlined 77b3f337f846c19e5ea9c885289510ac; next sequenceid=2 2016-12-02 15:29:19,816 INFO [PostOpenDeployTasks:6ef423c0591830e60ab18a766b7caf14] regionserver.HRegionServer(1995): Post open deploy tasks for testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. 2016-12-02 15:29:19,816 INFO [PostOpenDeployTasks:77b3f337f846c19e5ea9c885289510ac] regionserver.HRegionServer(1995): Post open deploy tasks for testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 2016-12-02 15:29:19,823 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.AssignmentManager(2949): Got transition OPENED for {6ef423c0591830e60ab18a766b7caf14 state=PENDING_OPEN, ts=1480721359764, server=10.10.9.179,52460,1480721340350} from 10.10.9.179,52460,1480721340350 2016-12-02 15:29:19,823 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.AssignmentManager(2949): Got transition OPENED for {77b3f337f846c19e5ea9c885289510ac state=PENDING_OPEN, ts=1480721359765, server=10.10.9.179,52464,1480721340388} from 10.10.9.179,52464,1480721340388 2016-12-02 15:29:19,824 INFO [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.RegionStates(1139): Transition {6ef423c0591830e60ab18a766b7caf14 state=PENDING_OPEN, ts=1480721359764, server=10.10.9.179,52460,1480721340350} to {6ef423c0591830e60ab18a766b7caf14 state=OPEN, ts=1480721359823, server=10.10.9.179,52460,1480721340350} 2016-12-02 15:29:19,824 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=PENDING_OPEN, ts=1480721359765, server=10.10.9.179,52464,1480721340388} to {77b3f337f846c19e5ea9c885289510ac state=OPEN, ts=1480721359824, server=10.10.9.179,52464,1480721340388} 2016-12-02 15:29:19,824 INFO [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. with state=OPEN, openSeqNum=2, server=10.10.9.179,52460,1480721340350 2016-12-02 15:29:19,824 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 
with state=OPEN, openSeqNum=2, server=10.10.9.179,52464,1480721340388 2016-12-02 15:29:19,826 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=2,queue=0,port=52448] master.RegionStates(466): Onlined 6ef423c0591830e60ab18a766b7caf14 on 10.10.9.179,52460,1480721340350 2016-12-02 15:29:19,826 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(466): Onlined 77b3f337f846c19e5ea9c885289510ac on 10.10.9.179,52464,1480721340388 2016-12-02 15:29:19,827 DEBUG [PostOpenDeployTasks:6ef423c0591830e60ab18a766b7caf14] regionserver.HRegionServer(2022): Finished post open deploy task for testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. 2016-12-02 15:29:19,827 DEBUG [PostOpenDeployTasks:77b3f337f846c19e5ea9c885289510ac] regionserver.HRegionServer(2022): Finished post open deploy task for testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 2016-12-02 15:29:19,830 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-0] handler.OpenRegionHandler(126): Opened testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. on 10.10.9.179,52460,1480721340350 2016-12-02 15:29:19,832 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-0] handler.OpenRegionHandler(126): Opened testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. on 10.10.9.179,52464,1480721340388 2016-12-02 15:29:19,926 DEBUG [hconnection-0x3dd818e8-shared-pool43-t25] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:19,927 DEBUG [hconnection-0x3dd818e8-shared-pool43-t25] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52464 2016-12-02 15:29:19,931 DEBUG [RpcServer.listener,port=52464] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54342; connections=2, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,932 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52464] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:19,932 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52464] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54342 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,945 DEBUG [hconnection-0x3dd818e8-shared-pool43-t29] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:19,945 DEBUG [hconnection-0x3dd818e8-shared-pool43-t29] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52460 2016-12-02 15:29:19,949 DEBUG [RpcServer.listener,port=52460] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54343; connections=2, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,949 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52460] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE) 2016-12-02 15:29:19,949 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52460] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54343 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 
0 2016-12-02 15:29:19,960 DEBUG [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:19,960 DEBUG [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:19,963 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54352; connections=15, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,964 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.2 (auth:SIMPLE) 2016-12-02 15:29:19,964 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54352 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,982 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.HMaster(1453): Client=tyu//10.10.9.179 Merge regions 77b3f337f846c19e5ea9c885289510ac and 6ef423c0591830e60ab18a766b7caf14 2016-12-02 15:29:19,987 DEBUG [pool-283-thread-1] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false 2016-12-02 15:29:19,987 DEBUG [pool-283-thread-1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52887 2016-12-02 15:29:19,991 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54354; connections=4, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,992 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.2 (auth:SIMPLE) 2016-12-02 15:29:19,993 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54354 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:19,993 DEBUG [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:19,993 DEBUG [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:19,995 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:19,996 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54355; connections=16, queued calls size (bytes)=131, general queued calls=0, priority queued calls=0 2016-12-02 15:29:19,997 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:19,997 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.3 (auth:SIMPLE) 2016-12-02 15:29:19,997 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 0 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721419993 java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more
2016-12-02 15:29:19,997 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54355 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0
2016-12-02 15:29:19,998 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:20,007 DEBUG [pool-288-thread-1] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false
2016-12-02 15:29:20,007 DEBUG [pool-288-thread-1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52887
2016-12-02 15:29:20,010 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54357; connections=5, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0
2016-12-02 15:29:20,011 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.3 (auth:SIMPLE)
2016-12-02 15:29:20,011 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54357 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0
2016-12-02 15:29:20,011 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,011 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,011 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 0 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54357 deadline: 1480721420011
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:20,011 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:20,039 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] procedure2.ProcedureExecutor(706): Procedure MergeTableRegionsProcedure (table=testRegionMerge regions=[testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac., testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. ] forcible=true) id=6 owner=tyu state=RUNNABLE:MERGE_TABLE_REGIONS_PREPARE added to the store.
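The WARN/ERROR pairs above repeat because the peer cluster's region servers have not finished initializing their replication sink: every ReplicateWALEntry RPC is rejected with "Replication services are not initialized yet", and the source-side HBaseInterClusterReplicationEndpoint treats the failure as transient and re-ships the batch until the sink comes up. Below is a minimal sketch of that retry-with-backoff pattern in plain Java, not HBase code; the RemoteCall interface, the backoff constants, and maxAttempts are illustrative assumptions.

import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Sketch only: models the retry behaviour visible in the log, not the
// actual HBase replication source implementation.
public class ReplicationRetrySketch {
  interface RemoteCall { void replicateWALEntry() throws IOException; }

  static void shipWithRetry(RemoteCall sink, int maxAttempts)
      throws IOException, InterruptedException {
    long sleepMs = 100;                           // initial backoff (assumed)
    for (int attempt = 1; ; attempt++) {
      try {
        sink.replicateWALEntry();                 // fails while the peer is starting
        return;                                   // shipped successfully
      } catch (IOException e) {
        if (attempt >= maxAttempts) throw e;      // give up after maxAttempts
        TimeUnit.MILLISECONDS.sleep(sleepMs);
        sleepMs = Math.min(sleepMs * 2, 10_000);  // exponential backoff, capped
      }
    }
  }

  public static void main(String[] args) throws Exception {
    int[] calls = {0};
    // Simulated sink that rejects the first two attempts, like the log above.
    shipWithRetry(() -> {
      if (++calls[0] < 3) {
        throw new IOException("Replication services are not initialized yet");
      }
    }, 10);
    System.out.println("shipped after " + calls[0] + " attempts");
  }
}

Run standalone, the sketch fails twice and succeeds on the third attempt, which mirrors how the WARN entries stop once the peer finishes starting.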
2016-12-02 15:29:20,113 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,113 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,113 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 1 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721420113
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:20,114 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:20,123 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,123 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,123 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 1 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54357 deadline: 1480721420123
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:20,123 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:20,143 INFO [ProcedureExecutorWorker-6] procedure.MergeTableRegionsProcedure(475): Moving regions to same server for merge: hri=testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac., src=10.10.9.179,52464,1480721340388, dest=10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,143 DEBUG [ProcedureExecutorWorker-5] procedure2.ProcedureExecutor(987): Procedure completed in 1.4140sec: SplitTableRegionProcedure (table=testRegionMerge parent region={ENCODED => 3e7435b86f12523b1a988d8de8c0f489, NAME => 'testRegionMerge,,1480721350056.3e7435b86f12523b1a988d8de8c0f489.', STARTKEY => '', ENDKEY => ''} first daughter region={ENCODED => 77b3f337f846c19e5ea9c885289510ac, NAME => 'testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.', STARTKEY => '', ENDKEY => 'r50'} and second daughter region={ENCODED => 6ef423c0591830e60ab18a766b7caf14, NAME => 'testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.', STARTKEY => 'r50', ENDKEY => ''}) id=5 owner=tyu.hfs.5 state=FINISHED
2016-12-02 15:29:20,144 DEBUG [ProcedureExecutorWorker-6] master.AssignmentManager(1382): Starting unassign of testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. (offlining), current state: {77b3f337f846c19e5ea9c885289510ac state=OPEN, ts=1480721359824, server=10.10.9.179,52464,1480721340388}
2016-12-02 15:29:20,144 INFO [ProcedureExecutorWorker-6] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=OPEN, ts=1480721359824, server=10.10.9.179,52464,1480721340388} to {77b3f337f846c19e5ea9c885289510ac state=PENDING_CLOSE, ts=1480721360144, server=10.10.9.179,52464,1480721340388}
2016-12-02 15:29:20,144 INFO [ProcedureExecutorWorker-6] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. with state=PENDING_CLOSE
2016-12-02 15:29:20,151 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52464] regionserver.RSRpcServices(1375): Close 77b3f337f846c19e5ea9c885289510ac, moving to 10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,153 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-0] handler.CloseRegionHandler(90): Processing close of testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,155 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HRegion(1486): Closing testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.: disabling compactions & flushes
2016-12-02 15:29:20,155 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,155 INFO [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HRegion(2442): Flushing 1/1 column families, memstore=104 B
2016-12-02 15:29:20,157 DEBUG [ProcedureExecutorWorker-6] master.AssignmentManager(955): Sent CLOSE to 10.10.9.179,52464,1480721340388 for region testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
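The entry at 15:29:20,039 shows the master persisting a MergeTableRegionsProcedure for the two daughter regions produced by the just-finished split, and at 15:29:20,143 the procedure's first step is to co-locate both regions on one region server before merging. A client would typically request such a merge roughly as below; this is a sketch assuming the HBase 2.x Admin#mergeRegionsAsync(byte[], byte[], boolean) API, with the encoded region names hard-coded from the log where a real caller would look them up (e.g. via Admin#getTableRegions).

import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: requesting the forcible merge seen in the log from a client.
public class MergeRegionsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.mergeRegionsAsync(
          Bytes.toBytes("77b3f337f846c19e5ea9c885289510ac"),  // first daughter
          Bytes.toBytes("6ef423c0591830e60ab18a766b7caf14"),  // second daughter
          true)                        // forcible=true, matching the procedure
        .get(60, TimeUnit.SECONDS);    // block until the master procedure finishes
    }
  }
}

The merge itself runs as a procedure on the master, which is why the log shows it being "added to the store" (the procedure WAL) before any region movement happens.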
2016-12-02 15:29:20,167 DEBUG [RpcServer idle connection scanner for port 52448] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52448: task running
2016-12-02 15:29:20,240 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741848_1024{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|FINALIZED]]} size 0
2016-12-02 15:29:20,241 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741848_1024{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|FINALIZED], ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|FINALIZED]]} size 0
2016-12-02 15:29:20,242 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741848_1024 size 4926
2016-12-02 15:29:20,243 INFO [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=8, memsize=104, hasBloomFilter=true, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/.tmp/7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:20,287 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/.tmp/7d32bddca91e4f98abab98f3a0fb587e as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/f/7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:20,289 DEBUG [RpcServer idle connection scanner for port 52450] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52450: task running
2016-12-02 15:29:20,291 INFO [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/f/7d32bddca91e4f98abab98f3a0fb587e, entries=4, sequenceid=8, filesize=4.8 K
2016-12-02 15:29:20,292 INFO [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HRegion(2644): Finished memstore flush of ~104 B/104, currentsize=0 B/0 for region testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. in 137ms, sequenceid=8, compaction requested=false
2016-12-02 15:29:20,297 INFO [StoreCloserThread-testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.-1] regionserver.HStore(874): Closed f
2016-12-02 15:29:20,301 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/recovered.edits/11.seqid to file, newSeqId=11, maxSeqId=2
2016-12-02 15:29:20,302 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-0] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter
2016-12-02 15:29:20,302 INFO [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HRegion(1643): Closed testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,302 INFO [RS_CLOSE_REGION-10.10.9.179:52464-0] regionserver.HRegionServer(3285): Adding moved region record: 77b3f337f846c19e5ea9c885289510ac to 10.10.9.179,52460,1480721340350 as of 8
2016-12-02 15:29:20,303 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.AssignmentManager(2949): Got transition CLOSED for {77b3f337f846c19e5ea9c885289510ac state=PENDING_CLOSE, ts=1480721360144, server=10.10.9.179,52464,1480721340388} from 10.10.9.179,52464,1480721340388
2016-12-02 15:29:20,303 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=PENDING_CLOSE, ts=1480721360144, server=10.10.9.179,52464,1480721340388} to {77b3f337f846c19e5ea9c885289510ac state=CLOSED, ts=1480721360303, server=10.10.9.179,52464,1480721340388}
2016-12-02 15:29:20,304 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. with state=CLOSED
2016-12-02 15:29:20,306 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-0] handler.CloseRegionHandler(122): Closed testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,313 DEBUG [AM.-pool3-t4] master.AssignmentManager(1285): Found an existing plan for testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. destination server is 10.10.9.179,52460,1480721340350 accepted as a dest server = true
2016-12-02 15:29:20,313 DEBUG [AM.-pool3-t4] master.AssignmentManager(1330): Using pre-existing plan for testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.; plan=hri=testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac., src=10.10.9.179,52464,1480721340388, dest=10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,313 INFO [AM.-pool3-t4] master.AssignmentManager(1105): Assigning testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. to 10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,313 INFO [AM.-pool3-t4] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=CLOSED, ts=1480721360303, server=10.10.9.179,52464,1480721340388} to {77b3f337f846c19e5ea9c885289510ac state=PENDING_OPEN, ts=1480721360313, server=10.10.9.179,52460,1480721340350}
2016-12-02 15:29:20,313 INFO [AM.-pool3-t4] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. with state=PENDING_OPEN, sn=10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,316 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.RSRpcServices(1772): Open testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,319 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-1] regionserver.HRegion(6583): Opening region: {ENCODED => 77b3f337f846c19e5ea9c885289510ac, NAME => 'testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.', STARTKEY => '', ENDKEY => 'r50'}
2016-12-02 15:29:20,319 INFO [RS_OPEN_REGION-10.10.9.179:52460-1] coprocessor.CoprocessorHost(162): System coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter was loaded successfully with priority (536870911).
2016-12-02 15:29:20,319 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-1] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table testRegionMerge 77b3f337f846c19e5ea9c885289510ac
2016-12-02 15:29:20,320 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-1] regionserver.HRegion(743): Instantiated testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,321 INFO [StoreOpener-77b3f337f846c19e5ea9c885289510ac-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:20,321 INFO [StoreOpener-77b3f337f846c19e5ea9c885289510ac-1] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:20,321 INFO [StoreOpener-77b3f337f846c19e5ea9c885289510ac-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:20,322 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,322 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,322 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 2 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721420322
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:20,323 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:20,328 DEBUG [RpcServer idle connection scanner for port 52454] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52454: task running
2016-12-02 15:29:20,328 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,328 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:20,328 DEBUG [StoreOpener-77b3f337f846c19e5ea9c885289510ac-1] regionserver.HStore(532): loaded hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/f/7d32bddca91e4f98abab98f3a0fb587e, isReference=false, isBulkLoadResult=false, seqid=8, majorCompaction=false
2016-12-02 15:29:20,328 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 2 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54357 deadline: 1480721420328
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:20,329 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:20,333 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-1] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac
2016-12-02 15:29:20,336 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-1] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/recovered.edits/12.seqid to file, newSeqId=12, maxSeqId=11
2016-12-02 15:29:20,336 INFO [RS_OPEN_REGION-10.10.9.179:52460-1] regionserver.HRegion(893): Onlined 77b3f337f846c19e5ea9c885289510ac; next sequenceid=12
2016-12-02 15:29:20,338 INFO [PostOpenDeployTasks:77b3f337f846c19e5ea9c885289510ac] regionserver.HRegionServer(1995): Post open deploy tasks for testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,340 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.AssignmentManager(2949): Got transition OPENED for {77b3f337f846c19e5ea9c885289510ac state=PENDING_OPEN, ts=1480721360313, server=10.10.9.179,52460,1480721340350} from 10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,340 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=PENDING_OPEN, ts=1480721360313, server=10.10.9.179,52460,1480721340350} to {77b3f337f846c19e5ea9c885289510ac state=OPEN, ts=1480721360340, server=10.10.9.179,52460,1480721340350}
2016-12-02 15:29:20,340 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. with state=OPEN, openSeqNum=12, server=10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,342 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(466): Onlined 77b3f337f846c19e5ea9c885289510ac on 10.10.9.179,52460,1480721340350
2016-12-02 15:29:20,342 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(476): Offlined 77b3f337f846c19e5ea9c885289510ac from 10.10.9.179,52464,1480721340388
2016-12-02 15:29:20,342 DEBUG [PostOpenDeployTasks:77b3f337f846c19e5ea9c885289510ac] regionserver.HRegionServer(2022): Finished post open deploy task for testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
2016-12-02 15:29:20,344 DEBUG [RS_OPEN_REGION-10.10.9.179:52460-1] handler.OpenRegionHandler(126): Opened testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.
on 10.10.9.179,52460,1480721340350 2016-12-02 15:29:20,370 DEBUG [RpcServer idle connection scanner for port 52460] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52460: task running 2016-12-02 15:29:20,401 DEBUG [RpcServer idle connection scanner for port 52464] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52464: task running 2016-12-02 15:29:20,435 DEBUG [RpcServer idle connection scanner for port 52467] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52467: task running 2016-12-02 15:29:20,489 DEBUG [RpcServer idle connection scanner for port 52473] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52473: task running 2016-12-02 15:29:20,523 DEBUG [RpcServer idle connection scanner for port 52476] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52476: task running 2016-12-02 15:29:20,551 DEBUG [RpcServer idle connection scanner for port 52479] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52479: task running 2016-12-02 15:29:20,571 DEBUG [ProcedureExecutorWorker-6] master.AssignmentManager(2949): Got transition READY_TO_MERGE for 0115985df04bcc343330799dd037ce66 from 10.10.9.179,52460,1480721340350 2016-12-02 15:29:20,572 INFO [ProcedureExecutorWorker-6] master.RegionStates(1139): Transition {6ef423c0591830e60ab18a766b7caf14 state=OPEN, ts=1480721359823, server=10.10.9.179,52460,1480721340350} to {6ef423c0591830e60ab18a766b7caf14 state=MERGING, ts=1480721360572, server=10.10.9.179,52460,1480721340350} 2016-12-02 15:29:20,572 INFO [ProcedureExecutorWorker-6] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. with state=MERGING 2016-12-02 15:29:20,576 INFO [ProcedureExecutorWorker-6] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=OPEN, ts=1480721360340, server=10.10.9.179,52460,1480721340350} to {77b3f337f846c19e5ea9c885289510ac state=MERGING, ts=1480721360574, server=10.10.9.179,52460,1480721340350} 2016-12-02 15:29:20,576 INFO [ProcedureExecutorWorker-6] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 
with state=MERGING 2016-12-02 15:29:20,584 DEBUG [RpcServer idle connection scanner for port 52482] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52482: task running 2016-12-02 15:29:20,615 DEBUG [RpcServer idle connection scanner for port 52485] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52485: task running 2016-12-02 15:29:20,630 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:20,631 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:20,631 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 3 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721420630 java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more 2016-12-02 15:29:20,632 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error: java.io.IOException: java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 3 more at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) ... 
1 more 2016-12-02 15:29:20,636 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:20,636 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:20,636 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 3 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54357 deadline: 1480721420636 java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more 2016-12-02 15:29:20,637 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error: java.io.IOException: java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 3 more at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) ... 1 more 2016-12-02 15:29:20,683 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.RSRpcServices(1405): Close and offline [6ef423c0591830e60ab18a766b7caf14, 77b3f337f846c19e5ea9c885289510ac] regions. 
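
The recurring ServiceException above ("Replication services are not initialized yet") is raised inside RSRpcServices.replicateWALEntry on the sink cluster, whose replication sink has evidently not finished starting; the source side's HBaseInterClusterReplicationEndpoint logs it as a "local or network error" and ships the same batch again later, which is why the identical trace repeats. A minimal sketch of that retry pattern, under stated assumptions — WalShipper is a hypothetical stand-in for the real ReplicateWALEntry RPC, not HBase's actual implementation:

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;

    public class ReplicateWithRetry {

        /** Hypothetical stand-in for the remote ReplicateWALEntry call; assumed to
         *  throw IOException while the sink's replication services are starting up. */
        interface WalShipper {
            void replicateBatch() throws IOException;
        }

        static void shipWithRetry(WalShipper shipper, int maxRetries, long sleepMs)
                throws IOException, InterruptedException {
            for (int attempt = 0; ; attempt++) {
                try {
                    shipper.replicateBatch();
                    return; // batch acknowledged by the sink
                } catch (IOException e) {
                    if (attempt >= maxRetries) {
                        throw e; // give up and surface the failure to the caller
                    }
                    // Log and back off before re-shipping the same batch,
                    // mirroring the repeated WARN lines in this log.
                    System.err.println("Can't replicate (attempt " + attempt + "): " + e.getMessage());
                    TimeUnit.MILLISECONDS.sleep(sleepMs * (attempt + 1));
                }
            }
        }
    }

Once the sink side is up, the same call goes through — see the "Started replicating mutations." / "Finished replicating mutations." lines further down.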
2016-12-02 15:29:20,686 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(1486): Closing testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.: disabling compactions & flushes 2016-12-02 15:29:20,686 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. 2016-12-02 15:29:20,686 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(2442): Flushing 1/1 column families, memstore=130 B 2016-12-02 15:29:20,702 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|FINALIZED]]} size 0 2016-12-02 15:29:20,703 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|FINALIZED]]} size 0 2016-12-02 15:29:20,704 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|FINALIZED], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|FINALIZED]]} size 0 2016-12-02 15:29:20,705 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=9, memsize=130, hasBloomFilter=true, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/.tmp/b9d9a5aa40c343a1b265b67fc35cef21 2016-12-02 15:29:20,713 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/.tmp/b9d9a5aa40c343a1b265b67fc35cef21 as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/f/b9d9a5aa40c343a1b265b67fc35cef21 2016-12-02 15:29:20,716 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/f/b9d9a5aa40c343a1b265b67fc35cef21, entries=5, sequenceid=9, filesize=4.8 K 2016-12-02 15:29:20,719 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(2644): 
Finished memstore flush of ~130 B/130, currentsize=0 B/0 for region testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. in 33ms, sequenceid=9, compaction requested=false 2016-12-02 15:29:20,722 INFO [StoreCloserThread-testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.-1] regionserver.HStore(874): Closed f 2016-12-02 15:29:20,725 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/recovered.edits/12.seqid to file, newSeqId=12, maxSeqId=2 2016-12-02 15:29:20,726 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter 2016-12-02 15:29:20,726 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(1643): Closed testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14. 2016-12-02 15:29:20,727 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] hbase.MetaTableAccessor(1398): Put{"totalColumns":2,"row":"6ef423c0591830e60ab18a766b7caf14","families":{"rep_meta":[{"qualifier":"_TABLENAME_","vlen":15,"tag":[],"timestamp":9223372036854775807}],"rep_barrier":[{"qualifier":"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0A","vlen":8,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:20,728 DEBUG [hconnection-0x7130b450-shared-pool58-t1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:20,728 DEBUG [hconnection-0x7130b450-shared-pool58-t1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52448 2016-12-02 15:29:20,731 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54420; connections=17, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:20,732 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.2 (auth:SIMPLE) 2016-12-02 15:29:20,732 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54420 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:20,735 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(1486): Closing testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.: disabling compactions & flushes 2016-12-02 15:29:20,735 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 
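
Here both parent regions (6ef423c0591830e60ab18a766b7caf14 and 77b3f337f846c19e5ea9c885289510ac) are flushed, closed, and offlined on the region server so the master can complete the merge into 0115985df04bcc343330799dd037ce66. From a client's perspective the whole sequence is typically triggered by a single Admin call; a minimal sketch assuming the HBase 2.x Admin API (connection setup elided; getTableRegions is deprecated in favor of getRegions on later versions):

    import java.util.List;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class MergeExample {
        static void mergeFirstTwoRegions(Admin admin) throws Exception {
            List<HRegionInfo> regions =
                admin.getTableRegions(TableName.valueOf("testRegionMerge"));
            if (regions.size() < 2) return; // nothing to merge
            // The master then walks the parents through MERGING -> MERGE_PONR ->
            // MERGED and assigns the merged region, as the RegionStates
            // transitions in this log show.
            admin.mergeRegionsAsync(
                regions.get(0).getEncodedNameAsBytes(),
                regions.get(1).getEncodedNameAsBytes(),
                false /* forcible: false means only adjacent regions may merge */).get();
        }
    }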
2016-12-02 15:29:20,738 INFO [StoreCloserThread-testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.-1] regionserver.HStore(874): Closed f 2016-12-02 15:29:20,741 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/recovered.edits/14.seqid to file, newSeqId=14, maxSeqId=12 2016-12-02 15:29:20,741 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter 2016-12-02 15:29:20,741 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] regionserver.HRegion(1643): Closed testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac. 2016-12-02 15:29:20,741 DEBUG [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52460] hbase.MetaTableAccessor(1398): Put{"totalColumns":2,"row":"77b3f337f846c19e5ea9c885289510ac","families":{"rep_meta":[{"qualifier":"_TABLENAME_","vlen":15,"tag":[],"timestamp":9223372036854775807}],"rep_barrier":[{"qualifier":"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0C","vlen":8,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:20,755 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.MasterRpcServices(965): Checking to see if procedure is done procId=5 2016-12-02 15:29:20,856 INFO [ProcedureExecutorWorker-6] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:20,874 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|FINALIZED]]} size 0 2016-12-02 15:29:20,874 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|FINALIZED]]} size 0 2016-12-02 15:29:20,875 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|FINALIZED], 
ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|FINALIZED]]} size 0 2016-12-02 15:29:20,879 INFO [ProcedureExecutorWorker-6] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2016-12-02 15:29:20,892 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW]]} size 0 2016-12-02 15:29:20,893 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is added to blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|FINALIZED]]} size 0 2016-12-02 15:29:20,894 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|FINALIZED], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|FINALIZED]]} size 0 2016-12-02 15:29:21,038 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:21,038 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:21,038 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 4 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721421038 java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 3 more 2016-12-02 15:29:21,038 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error: java.io.IOException: java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) ... 
1 more 2016-12-02 15:29:21,043 DEBUG [pool-288-thread-5] ipc.RpcConnection(133): Use SIMPLE authentication for service AdminService, sasl=false 2016-12-02 15:29:21,043 DEBUG [pool-288-thread-5] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52893 2016-12-02 15:29:21,046 DEBUG [RpcServer.listener,port=52893] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54451; connections=2, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:21,047 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.3 (auth:SIMPLE) 2016-12-02 15:29:21,047 INFO [RpcServer.reader=2,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54451 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:21,051 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(207): Started replicating mutations. 2016-12-02 15:29:21,051 INFO [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7f7f48a4 connecting to ZooKeeper ensemble=localhost:60648 2016-12-02 15:29:21,053 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7f7f48a40x0, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2016-12-02 15:29:21,053 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893-EventThread] zookeeper.ZooKeeperWatcher(529): hconnection-0x7f7f48a4-0x158c1de825b0046 connected 2016-12-02 15:29:21,054 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] ipc.AbstractRpcClient(197): Codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags@5665ea8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2016-12-02 15:29:21,056 DEBUG [hconnection-0x7f7f48a4-metaLookup-shared--pool60-t1] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:21,056 DEBUG [hconnection-0x7f7f48a4-metaLookup-shared--pool60-t1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52887 2016-12-02 15:29:21,066 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54453; connections=6, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0 2016-12-02 15:29:21,067 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.10 (auth:SIMPLE) 2016-12-02 15:29:21,067 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54453 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:21,071 DEBUG [hconnection-0x7f7f48a4-shared-pool59-t1] ipc.RpcConnection(133): Use 
SIMPLE authentication for service ClientService, sasl=false 2016-12-02 15:29:21,071 DEBUG [hconnection-0x7f7f48a4-shared-pool59-t1] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52893 2016-12-02 15:29:21,072 DEBUG [RpcServer.listener,port=52893] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54454; connections=3, queued calls size (bytes)=341, general queued calls=0, priority queued calls=0 2016-12-02 15:29:21,073 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1936): Auth successful for tyu.hfs.10 (auth:SIMPLE) 2016-12-02 15:29:21,073 INFO [RpcServer.reader=0,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54454 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0 2016-12-02 15:29:21,075 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(211): Finished replicating mutations. 2016-12-02 15:29:21,105 DEBUG [ProcedureExecutorWorker-6] master.AssignmentManager(2949): Got transition MERGE_PONR for {0115985df04bcc343330799dd037ce66 state=MERGING_NEW, ts=1480721360578, server=10.10.9.179,52460,1480721340350} from 10.10.9.179,52460,1480721340350 2016-12-02 15:29:21,106 DEBUG [ProcedureExecutorWorker-6] hbase.MetaTableAccessor(1813): Put{"totalColumns":6,"row":"testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":1480721361105},{"qualifier":"mergeA","vlen":52,"tag":[],"timestamp":1480721361105},{"qualifier":"mergeB","vlen":52,"tag":[],"timestamp":1480721361105},{"qualifier":"server","vlen":17,"tag":[],"timestamp":1480721361105}]}}, Delete{"totalColumns":1,"row":"testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":1480721361105}]},"ts":9223372036854775807}, Delete{"totalColumns":1,"row":"testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":1480721361105}]},"ts":9223372036854775807}, Put{"totalColumns":1,"row":"6ef423c0591830e60ab18a766b7caf14","families":{"rep_meta":[{"qualifier":"_DAUGHTER_","vlen":32,"tag":[],"timestamp":9223372036854775807}]}}, Put{"totalColumns":1,"row":"77b3f337f846c19e5ea9c885289510ac","families":{"rep_meta":[{"qualifier":"_DAUGHTER_","vlen":32,"tag":[],"timestamp":9223372036854775807}]}} 2016-12-02 15:29:21,211 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(207): Started replicating mutations. 2016-12-02 15:29:21,215 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(211): Finished replicating mutations. 
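
After the MERGE_PONR meta transaction above, the merged region 0115985df04bcc343330799dd037ce66 comes online without rewriting any data: its store files are reference files named <hfile>.<encodedParentRegionName>, exactly as the StoreFileInfo lines below report ("reference '...' to region=... hfile=..."). A hypothetical helper (not HBase's parser) that splits such a name — in these logs neither component contains a dot, so a single split suffices:

    public class ReferenceName {
        /** Splits "<hfile>.<encodedParentRegionName>" into its two parts. */
        static String[] parse(String name) {
            int dot = name.indexOf('.');
            if (dot < 0) throw new IllegalArgumentException("not a reference: " + name);
            // [0] = hfile in the parent region, [1] = encoded parent region name
            return new String[] { name.substring(0, dot), name.substring(dot + 1) };
        }

        public static void main(String[] args) {
            String[] parts =
                parse("7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac");
            System.out.println("hfile=" + parts[0] + " parent=" + parts[1]);
        }
    }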
2016-12-02 15:29:21,325 INFO [ProcedureExecutorWorker-6] master.RegionStates(1139): Transition {6ef423c0591830e60ab18a766b7caf14 state=MERGING, ts=1480721360572, server=10.10.9.179,52460,1480721340350} to {6ef423c0591830e60ab18a766b7caf14 state=MERGED, ts=1480721361325, server=10.10.9.179,52460,1480721340350}
2016-12-02 15:29:21,325 INFO [ProcedureExecutorWorker-6] master.RegionStates(604): Offlined 6ef423c0591830e60ab18a766b7caf14 from 10.10.9.179,52460,1480721340350
2016-12-02 15:29:21,325 INFO [ProcedureExecutorWorker-6] master.RegionStates(1139): Transition {77b3f337f846c19e5ea9c885289510ac state=MERGING, ts=1480721360574, server=10.10.9.179,52460,1480721340350} to {77b3f337f846c19e5ea9c885289510ac state=MERGED, ts=1480721361325, server=10.10.9.179,52460,1480721340350}
2016-12-02 15:29:21,325 INFO [ProcedureExecutorWorker-6] master.RegionStates(604): Offlined 77b3f337f846c19e5ea9c885289510ac from 10.10.9.179,52460,1480721340350
2016-12-02 15:29:21,325 INFO [ProcedureExecutorWorker-6] master.RegionStates(1139): Transition {0115985df04bcc343330799dd037ce66 state=MERGING_NEW, ts=1480721360578, server=10.10.9.179,52460,1480721340350} to {0115985df04bcc343330799dd037ce66 state=OFFLINE, ts=1480721361325, server=null}
2016-12-02 15:29:21,329 DEBUG [AM.-pool3-t5] balancer.RegionLocationFinder(288): HDFSBlocksDistribution not found in cache for region testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:21,331 DEBUG [AM.-pool3-t5] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,332 DEBUG [AM.-pool3-t5] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,333 DEBUG [AM.-pool3-t5] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,334 DEBUG [AM.-pool3-t5] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,334 DEBUG [AM.-pool3-t5] master.AssignmentManager(1321): No previous transition plan found (or ignoring an existing plan) for testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.; generated random plan=hri=testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66., src=, dest=10.10.9.179,52464,1480721340388; 11 (online=11) available servers, forceNewPlan=false
2016-12-02 15:29:21,335 INFO [AM.-pool3-t5] master.AssignmentManager(1105): Assigning testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. to 10.10.9.179,52464,1480721340388
2016-12-02 15:29:21,335 INFO [AM.-pool3-t5] master.RegionStates(1139): Transition {0115985df04bcc343330799dd037ce66 state=OFFLINE, ts=1480721361325, server=null} to {0115985df04bcc343330799dd037ce66 state=PENDING_OPEN, ts=1480721361335, server=10.10.9.179,52464,1480721340388}
2016-12-02 15:29:21,335 INFO [AM.-pool3-t5] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. with state=PENDING_OPEN, sn=10.10.9.179,52464,1480721340388
2016-12-02 15:29:21,336 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52464] regionserver.RSRpcServices(1772): Open testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:21,340 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-1] regionserver.HRegion(6583): Opening region: {ENCODED => 0115985df04bcc343330799dd037ce66, NAME => 'testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.', STARTKEY => '', ENDKEY => ''}
2016-12-02 15:29:21,340 INFO [RS_OPEN_REGION-10.10.9.179:52464-1] coprocessor.CoprocessorHost(162): System coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter was loaded successfully with priority (536870911).
2016-12-02 15:29:21,340 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-1] regionserver.MetricsRegionSourceImpl(74): Creating new MetricsRegionSourceImpl for table testRegionMerge 0115985df04bcc343330799dd037ce66
2016-12-02 15:29:21,341 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-1] regionserver.HRegion(743): Instantiated testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:21,341 WARN [RS_OPEN_REGION-10.10.9.179:52464-1] regionserver.HRegionFileSystem(823): .regioninfo file not found for region: 0115985df04bcc343330799dd037ce66 on table testRegionMerge
2016-12-02 15:29:21,348 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741852_1028{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW]]} size 0
2016-12-02 15:29:21,349 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741852_1028{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|FINALIZED]]} size 0
2016-12-02 15:29:21,351 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741852_1028 size 50
2016-12-02 15:29:21,352 INFO [StoreOpener-0115985df04bcc343330799dd037ce66-1] regionserver.HStore(252): Memstore class name is org.apache.hadoop.hbase.regionserver.DefaultMemStore
2016-12-02 15:29:21,352 INFO [StoreOpener-0115985df04bcc343330799dd037ce66-1] hfile.CacheConfig(256): Created cacheConfig for f: blockCache=LruBlockCache{blockCount=0, currentSize=765632, freeSize=1043196672, maxSize=1043962304, heapSize=765632, minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-02 15:29:21,353 INFO [StoreOpener-0115985df04bcc343330799dd037ce66-1] compactions.CompactionConfiguration(145): size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory
2016-12-02 15:29:21,355 DEBUG [StoreOpener-0115985df04bcc343330799dd037ce66-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,357 DEBUG [StoreOpener-0115985df04bcc343330799dd037ce66-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,357 DEBUG [StoreFileOpenerThread-f-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,359 DEBUG [StoreFileOpenerThread-f-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,381 DEBUG [StoreOpener-0115985df04bcc343330799dd037ce66-1] regionserver.HStore(532): loaded hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac, isReference=true, isBulkLoadResult=false, seqid=9, majorCompaction=false
2016-12-02 15:29:21,381 DEBUG [StoreFileOpenerThread-f-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,381 DEBUG [StoreFileOpenerThread-f-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,385 DEBUG [StoreOpener-0115985df04bcc343330799dd037ce66-1] regionserver.HStore(532): loaded hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14, isReference=true, isBulkLoadResult=false, seqid=10, majorCompaction=false
2016-12-02 15:29:21,385 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-1] regionserver.HRegion(4058): Found 0 recovered edits file(s) under hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66
2016-12-02 15:29:21,388 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-1] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/recovered.edits/11.seqid to file, newSeqId=11, maxSeqId=0
2016-12-02 15:29:21,388 INFO [RS_OPEN_REGION-10.10.9.179:52464-1] regionserver.HRegion(893): Onlined 0115985df04bcc343330799dd037ce66; next sequenceid=11
2016-12-02 15:29:21,389 INFO [PostOpenDeployTasks:0115985df04bcc343330799dd037ce66] regionserver.HRegionServer(1995): Post open deploy tasks for testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:21,390 DEBUG [PostOpenDeployTasks:0115985df04bcc343330799dd037ce66] regionserver.CompactSplitThread(361): Small Compaction requested: system; Because: Opening Region; compaction_queue=(0:0), split_queue=0, merge_queue=0
2016-12-02 15:29:21,392 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] compactions.SortedCompactionPolicy(68): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 10 blocking
2016-12-02 15:29:21,392 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,392 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.AssignmentManager(2949): Got transition OPENED for {0115985df04bcc343330799dd037ce66 state=PENDING_OPEN, ts=1480721361335, server=10.10.9.179,52464,1480721340388} from 10.10.9.179,52464,1480721340388
2016-12-02 15:29:21,392 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(1139): Transition {0115985df04bcc343330799dd037ce66 state=PENDING_OPEN, ts=1480721361335, server=10.10.9.179,52464,1480721340388} to {0115985df04bcc343330799dd037ce66 state=OPEN, ts=1480721361392, server=10.10.9.179,52464,1480721340388}
2016-12-02 15:29:21,392 INFO [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStateStore(208): Updating hbase:meta row testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. with state=OPEN, openSeqNum=11, server=10.10.9.179,52464,1480721340388
2016-12-02 15:29:21,392 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,396 DEBUG [RpcServer.deafult.FPBQ.Fifo.handler=0,queue=0,port=52448] master.RegionStates(466): Onlined 0115985df04bcc343330799dd037ce66 on 10.10.9.179,52464,1480721340388
2016-12-02 15:29:21,397 DEBUG [PostOpenDeployTasks:0115985df04bcc343330799dd037ce66] regionserver.HRegionServer(2022): Finished post open deploy task for testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:21,399 DEBUG [RS_OPEN_REGION-10.10.9.179:52464-1] handler.OpenRegionHandler(126): Opened testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. on 10.10.9.179,52464,1480721340388
2016-12-02 15:29:21,399 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.HStore(1659): 0115985df04bcc343330799dd037ce66 - f: Initiating minor compaction (all files)
2016-12-02 15:29:21,399 INFO [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.HRegion(2029): Starting compaction on f in region testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:21,399 INFO [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.HStore(1262): Starting compaction of 2 file(s) in f of testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. into tmpdir=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/.tmp, totalSize=9.6 K
2016-12-02 15:29:21,400 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,400 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,400 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] compactions.Compactor(189): Compacting hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac-hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/f/7d32bddca91e4f98abab98f3a0fb587e-top, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=9, earliestPutTs=1480721359933
2016-12-02 15:29:21,401 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,401 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,401 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] compactions.Compactor(189): Compacting hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14-hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/f/b9d9a5aa40c343a1b265b67fc35cef21-top, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=10, earliestPutTs=1480721359950
2016-12-02 15:29:21,402 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,403 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e
2016-12-02 15:29:21,405 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,406 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21
2016-12-02 15:29:21,413 INFO [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] throttle.PressureAwareThroughputController(153): testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.#f#compaction#2 average throughput is 0.22 MB/sec, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 10.00 MB/sec
2016-12-02 15:29:21,419 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741853_1029{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW]]} size 0
2016-12-02 15:29:21,420 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741853_1029{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|FINALIZED]]} size 0
2016-12-02 15:29:21,421 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741853_1029 size 5069
2016-12-02 15:29:21,424 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/.tmp/68c5939b78bb469f80b5c79147695b2b as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/68c5939b78bb469f80b5c79147695b2b
2016-12-02 15:29:21,433 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.HStore(1764): Completing compaction...
2016-12-02 15:29:21,433 INFO [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.HStore(1409): Completed compaction of 2 (all) file(s) in f of testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. into 68c5939b78bb469f80b5c79147695b2b(size=5.0 K), total size for store is 5.0 K. This selection was in queue for 0sec, and took 0sec to execute.
2016-12-02 15:29:21,435 INFO [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.CompactSplitThread$CompactionRunner(528): Completed compaction: Request = regionName=testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66., storeName=f, fileCount=2, fileSize=9.6 K (4.8 K, 4.8 K), priority=8, time=94092338110306; duration=0sec
2016-12-02 15:29:21,436 DEBUG [RS:3;10.10.9.179:52464-shortCompactions-1480721361390] regionserver.CompactSplitThread$CompactionRunner(553): CompactSplitThread Status: compaction_queue=(0:0), split_queue=0, merge_queue=0
2016-12-02 15:29:21,547 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:21,547 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:21,547 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 5 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721421547
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:21,548 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:21,612 DEBUG [ProcedureExecutorWorker-6] lock.ZKInterProcessLockBase(328): Released /1/table-lock/testRegionMerge/read-master:524480000000001
2016-12-02 15:29:21,613 DEBUG [ProcedureExecutorWorker-6] procedure2.ProcedureExecutor(987): Procedure completed in 1.5190sec: MergeTableRegionsProcedure (table=testRegionMerge regions=[testRegionMerge,,1480721358622.77b3f337f846c19e5ea9c885289510ac., testRegionMerge,r50,1480721358622.6ef423c0591830e60ab18a766b7caf14.] forcible=true) id=6 owner=tyu state=FINISHED
2016-12-02 15:29:21,692 DEBUG [hconnection-0x63884e4-shared-pool55-t3] ipc.RpcConnection(133): Use SIMPLE authentication for service ClientService, sasl=false
2016-12-02 15:29:21,692 DEBUG [hconnection-0x63884e4-shared-pool55-t3] ipc.NettyRpcConnection(254): Connecting to /10.10.9.179:52893
2016-12-02 15:29:21,693 DEBUG [RpcServer.listener,port=52893] ipc.RpcServer$ConnectionManager(3121): Server connection from 10.10.9.179:54514; connections=4, queued calls size (bytes)=0, general queued calls=0, priority queued calls=0
2016-12-02 15:29:21,693 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1936): Auth successful for tyu (auth:SIMPLE)
2016-12-02 15:29:21,693 INFO [RpcServer.reader=1,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$Connection(1966): Connection from 10.10.9.179 port: 54514 with version info: version: "2.0.0-SNAPSHOT" url: "git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: "7775feda05b0db63178c81910946adfec4c4c41f" user: "tyu" date: "Fri Dec 2 15:27:41 PST 2016" src_checksum: "659b5c3cf18852b131d2d9a46f650d84" version_major: 2 version_minor: 0
2016-12-02 15:29:21,697 INFO [main] replication.TestSerialReplication(301): [10, 20]
2016-12-02 15:29:21,697 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:21,697 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 2
2016-12-02 15:29:21,903 INFO [main] replication.TestSerialReplication(301): [10, 20]
2016-12-02 15:29:21,903 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:21,903 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 2
2016-12-02 15:29:22,109 INFO [main] replication.TestSerialReplication(301): [10, 20]
2016-12-02 15:29:22,109 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:22,109 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 2
2016-12-02 15:29:22,163 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:22,164 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:22,164 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 6 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721422163
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:22,165 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:22,316 INFO [main] replication.TestSerialReplication(301): [10, 20]
2016-12-02 15:29:22,316 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:22,316 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 2
2016-12-02 15:29:22,413 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(207): Started replicating mutations.
2016-12-02 15:29:22,416 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(211): Finished replicating mutations.
2016-12-02 15:29:22,525 INFO [main] replication.TestSerialReplication(301): [10, 20, 30]
2016-12-02 15:29:22,525 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:22,525 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 3
2016-12-02 15:29:22,731 INFO [main] replication.TestSerialReplication(301): [10, 20, 30]
2016-12-02 15:29:22,731 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:22,732 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 3
2016-12-02 15:29:22,875 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:22,876 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:22,876 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 7 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721422875
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:22,878 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:22,937 INFO [main] replication.TestSerialReplication(301): [10, 20, 30]
2016-12-02 15:29:22,938 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:22,938 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 3
2016-12-02 15:29:23,144 INFO [main] replication.TestSerialReplication(301): [10, 20, 30]
2016-12-02 15:29:23,145 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:23,145 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 3
2016-12-02 15:29:23,350 INFO [main] replication.TestSerialReplication(301): [10, 20, 30]
2016-12-02 15:29:23,350 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:23,350 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 3
2016-12-02 15:29:23,556 INFO [main] replication.TestSerialReplication(301): [10, 20, 30]
2016-12-02 15:29:23,557 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:23,557 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 3
2016-12-02 15:29:23,616 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(207): Started replicating mutations.
2016-12-02 15:29:23,618 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(211): Finished replicating mutations.
2016-12-02 15:29:23,688 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:23,688 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:23,689 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 8 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721423688
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
2016-12-02 15:29:23,689 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ... 3 more
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:23,764 INFO [main] replication.TestSerialReplication(301): [10, 20, 30, 40]
2016-12-02 15:29:23,764 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:23,764 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 4
2016-12-02 15:29:23,971 INFO [main] replication.TestSerialReplication(301): [10, 20, 30, 40]
2016-12-02 15:29:23,971 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:23,972 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 4
2016-12-02 15:29:24,176 INFO [main] replication.TestSerialReplication(301): [10, 20, 30, 40]
2016-12-02 15:29:24,177 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:24,177 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 4
2016-12-02 15:29:24,380 INFO [main] replication.TestSerialReplication(301): [10, 20, 30, 40]
2016-12-02 15:29:24,381 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:24,381 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 4
2016-12-02 15:29:24,585 INFO [main] replication.TestSerialReplication(301): [10, 20, 30, 40]
2016-12-02 15:29:24,586 INFO [main] replication.TestSerialReplication(302): []
2016-12-02 15:29:24,586 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. Expected 18 , actual 4
2016-12-02 15:29:24,597 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2623): Caught a ServiceException with null cause
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:24,597 ERROR [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.RpcServer(2634): Unexpected throwable object
org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:24,597 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 9 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721424597
java.io.IOException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    ...
3 more 2016-12-02 15:29:24,598 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error: java.io.IOException: java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 3 more at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Replication services are not initialized yet at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2635) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) Caused by: org.apache.hadoop.hbase.shaded.com.google.protobuf.ServiceException: Replication services are not initialized yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2039) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) ... 
3 more at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) ... 1 more 2016-12-02 15:29:24,603 WARN [HBase-Metrics2-1] impl.MetricsConfig(125): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2016-12-02 15:29:24,795 INFO [main] replication.TestSerialReplication(301): [10, 20, 30, 40] 2016-12-02 15:29:24,796 INFO [main] replication.TestSerialReplication(302): [] 2016-12-02 15:29:24,796 INFO [main] replication.TestSerialReplication(310): Waiting all logs pushed to slave. 
Expected 18 , actual 4 2016-12-02 15:29:24,838 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(207): Started replicating mutations. 2016-12-02 15:29:24,840 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] regionserver.ReplicationSink(211): Finished replicating mutations. 2016-12-02 15:29:25,002 INFO [main] replication.TestSerialReplication(301): [10, 20, 30, 40] 2016-12-02 15:29:25,002 INFO [main] replication.TestSerialReplication(302): [11] 2016-12-02 15:29:25,117 INFO [main] hbase.ResourceChecker(172): after: replication.TestSerialReplication#testRegionMerge Thread=1319 (was 1264) Potentially hanging thread: pool-283-thread-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t35 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_414241748_1 at /127.0.0.1:54539 [Waiting for operation #35] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1349508599_1 at /127.0.0.1:54764 [Waiting for operation #3] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1720499883_1 at /127.0.0.1:54717 [Waiting for operation #11] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-283-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893-SendThread(localhost:60648) sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) Potentially hanging thread: pool-288-thread-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_OPEN_REGION-10.10.9.179:52460-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_OPEN_REGION-10.10.9.179:52464-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x57479bf4-shared-pool39-t6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-485384104_1 at /127.0.0.1:54392 [Waiting for operation #36] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) 
sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x57479bf4-shared-pool39-t5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1403560136_1 at /127.0.0.1:54745 [Waiting for operation #7] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-283-thread-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-485384104_1 at /127.0.0.1:54669 [Waiting for operation #12] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AM.-pool3-t4 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_124300399_1 at /127.0.0.1:54053 [Waiting for operation #83] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1195110250_1 at /127.0.0.1:54105 [Waiting for operation #52] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t17 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Default-IPC-NioEventLoopGroup-1-15 sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:684) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:344) io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AM.-pool3-t1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-283-thread-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-283-thread-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: IPC Parameter Sending Thread #7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t34 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: IPC Parameter Sending Thread #9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: IPC Parameter Sending Thread #8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t33 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AM.-pool3-t5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_CLOSE_REGION-10.10.9.179:52464-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-283-thread-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_124300399_1 at /127.0.0.1:54773 [Waiting for operation #3] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS:3;10.10.9.179:52464-shortCompactions-1480721361390 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-283-thread-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1111021780_1 at /127.0.0.1:54690 [Waiting for operation #13] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x57479bf4-shared-pool39-t3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_124300399_1 at /127.0.0.1:54727 [Waiting for operation #15] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1403560136_1 at /127.0.0.1:54718 [Waiting for operation #9] sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_OPEN_REGION-10.10.9.179:52473-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: pool-288-thread-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_OPEN_REGION-10.10.9.179:52464-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x7f7f48a4-shared-pool59-t3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t11
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_414241748_1 at /127.0.0.1:54129 [Waiting for operation #99]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_OPEN_REGION-10.10.9.179:52893-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS_OPEN_REGION-10.10.9.179:52460-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: pool-283-thread-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: AM.-pool3-t2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t15
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t22
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: RS:5;10.10.9.179:52473-splits-1480721358614
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x7f7f48a4-shared-pool59-t5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1403560136_1 at /127.0.0.1:54681 [Waiting for operation #17]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x7f7f48a4-shared-pool59-t4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: AM.-pool3-t3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: pool-283-thread-6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t32
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t31
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1111021780_1 at /127.0.0.1:54691 [Waiting for operation #11]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1195110250_1 at /127.0.0.1:54286 [Waiting for operation #26]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t9
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t8
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: AM.-pool3-t6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-485384104_1 at /127.0.0.1:54671 [Waiting for operation #11]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t19
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: Default-IPC-NioEventLoopGroup-1-16
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:684)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:344)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x57479bf4-shared-pool39-t4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t16
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t27
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t26
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1195110250_1 at /127.0.0.1:54287 [Waiting for operation #20]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-485384104_1 at /127.0.0.1:54320 [Waiting for operation #46]
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t36
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: pool-283-thread-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

Potentially hanging thread: hconnection-0x63884e4-shared-pool55-t13
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

 - Thread LEAK? -, OpenFileDescriptor=2745 (was 2704) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=10240 (was 10240), SystemLoadAverage=533 (was 455) - SystemLoadAverage LEAK? -, ProcessCount=284 (was 284), AvailableMemoryMB=2149 (was 2285)
2016-12-02 15:29:25,118 WARN [main] hbase.ResourceChecker(135): Thread=1319 is superior to 500
2016-12-02 15:29:25,119 WARN [main] hbase.ResourceChecker(135): OpenFileDescriptor=2745 is superior to 1024
2016-12-02 15:29:25,119 INFO [main] hbase.HBaseTestingUtility(1162): Shutting down minicluster
2016-12-02 15:29:25,119 INFO [main] client.ConnectionImplementation(1652): Closing master protocol: MasterService
2016-12-02 15:29:25,121 INFO [main] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0044
2016-12-02 15:29:25,123 DEBUG [main] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:25,124 DEBUG [main] util.JVMClusterUtil(246): Shutting down HBase Cluster
2016-12-02 15:29:25,124 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52887,1480721345911' *****
2016-12-02 15:29:25,124 INFO [main] regionserver.HRegionServer(1961): STOPPED: Cluster shutdown requested
2016-12-02 15:29:25,124 INFO [M:0;10.10.9.179:52887] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-12-02 15:29:25,124 INFO [SplitLogWorker-10.10.9.179:52887] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-12-02 15:29:25,124 INFO [SplitLogWorker-10.10.9.179:52887] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52887,1480721345911 exiting
2016-12-02 15:29:25,124 INFO [M:0;10.10.9.179:52887] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore.
2016-12-02 15:29:25,125 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running
2016-12-02 15:29:25,125 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52893,1480721345952' *****
2016-12-02 15:29:25,125 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested
2016-12-02 15:29:25,125 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting
2016-12-02 15:29:25,125 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running
2016-12-02 15:29:25,125 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52887: disconnecting client 10.10.9.179:53287. Number of active connections: 4
2016-12-02 15:29:25,125 INFO [RS:0;10.10.9.179:52893] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-12-02 15:29:25,125 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting
2016-12-02 15:29:25,125 INFO [SplitLogWorker-10.10.9.179:52893] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-12-02 15:29:25,125 INFO [SplitLogWorker-10.10.9.179:52893] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52893,1480721345952 exiting
2016-12-02 15:29:25,125 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52887: disconnecting client 10.10.9.179:53452. Number of active connections: 5
2016-12-02 15:29:25,125 INFO [M:0;10.10.9.179:52887] procedure2.ProcedureExecutor(544): Stopping the procedure executor
2016-12-02 15:29:25,125 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-12-02 15:29:25,132 INFO [M:0;10.10.9.179:52887] wal.WALProcedureStore(235): Stopping the WAL Procedure Store, isAbort=false
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-8] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-8,5,ProcedureExecutor]
2016-12-02 15:29:25,125 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52893: disconnecting client 10.10.9.179:54514. Number of active connections: 3
2016-12-02 15:29:25,125 INFO [RS:0;10.10.9.179:52893] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore.
2016-12-02 15:29:25,125 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/running
2016-12-02 15:29:25,132 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-3] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-3,5,ProcedureExecutor]
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-4] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-4,5,ProcedureExecutor]
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-1] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-1,5,ProcedureExecutor]
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-2] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-2,5,ProcedureExecutor]
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-7] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-7,5,ProcedureExecutor]
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-6] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-6,5,ProcedureExecutor]
2016-12-02 15:29:25,132 DEBUG [ProcedureExecutorWorker-5] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-5,5,ProcedureExecutor]
2016-12-02 15:29:25,132 INFO [RS:0;10.10.9.179:52893] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-12-02 15:29:25,132 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting
2016-12-02 15:29:25,133 INFO [RS:0;10.10.9.179:52893] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-12-02 15:29:25,133 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52893,1480721345952
2016-12-02 15:29:25,133 DEBUG [RS_CLOSE_REGION-10.10.9.179:52893-0] handler.CloseRegionHandler(90): Processing close of testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.
2016-12-02 15:29:25,133 DEBUG [RS:0;10.10.9.179:52893] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator
2016-12-02 15:29:25,135 DEBUG [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.HRegion(1486): Closing testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.: disabling compactions & flushes
2016-12-02 15:29:25,135 INFO [RS:0;10.10.9.179:52893] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b003f
2016-12-02 15:29:25,135 DEBUG [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.
2016-12-02 15:29:25,135 INFO [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.HRegion(2442): Flushing 1/1 column families, memstore=130 B
2016-12-02 15:29:25,135 DEBUG [RS:0;10.10.9.179:52893] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:25,137 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1320): Waiting on 1 regions to close
2016-12-02 15:29:25,137 DEBUG [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1324): {cbf52d9c9c92e7bbb1af49b6d521d080=testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.}
2016-12-02 15:29:25,140 INFO [master//10.10.9.179:0.leaseChecker] regionserver.Leases(147): master//10.10.9.179:0.leaseChecker closing leases
2016-12-02 15:29:25,140 INFO [master//10.10.9.179:0.leaseChecker] regionserver.Leases(150): master//10.10.9.179:0.leaseChecker closed leases
2016-12-02 15:29:25,157 INFO [IPC Server handler 5 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741837_1013{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|RBW]]} size 4953
2016-12-02 15:29:25,158 INFO [IPC Server handler 6 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|RBW]]} size 510
2016-12-02 15:29:25,159 INFO [M:0;10.10.9.179:52887] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-12-02 15:29:25,160 INFO [M:0;10.10.9.179:52887] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-12-02 15:29:25,160 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52887,1480721345911
2016-12-02 15:29:25,160 DEBUG [M:0;10.10.9.179:52887] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator
2016-12-02 15:29:25,160 DEBUG [RS_CLOSE_REGION-10.10.9.179:52887-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.
2016-12-02 15:29:25,160 INFO [M:0;10.10.9.179:52887] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b003e
2016-12-02 15:29:25,161 DEBUG [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.HRegion(1486): Closing hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.: disabling compactions & flushes
2016-12-02 15:29:25,162 DEBUG [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.HRegion(1525): Updates disabled for region hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.
2016-12-02 15:29:25,162 INFO [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.HRegion(2442): Flushing 1/1 column families, memstore=78 B
2016-12-02 15:29:25,162 DEBUG [M:0;10.10.9.179:52887] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:25,164 INFO [M:0;10.10.9.179:52887] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:25,164 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52893: disconnecting client 10.10.9.179:53581. Number of active connections: 2
2016-12-02 15:29:25,164 INFO [M:0;10.10.9.179:52887] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:25,165 INFO [M:0;10.10.9.179:52887] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:25,165 INFO [M:0;10.10.9.179:52887] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:25,166 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(1320): Waiting on 2 regions to close
2016-12-02 15:29:25,166 DEBUG [M:0;10.10.9.179:52887] regionserver.HRegionServer(1324): {09b1ecd6eda75f10b347b13abc2f2864=hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864., 1588230740=hbase:meta,,1.1588230740}
2016-12-02 15:29:25,166 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740
2016-12-02 15:29:25,167 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegion(1486): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-12-02 15:29:25,167 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegion(1525): Updates disabled for region hbase:meta,,1.1588230740
2016-12-02 15:29:25,167 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegion(2442): Flushing 5/5 column families, memstore=1.99 KB
2016-12-02 15:29:25,175 INFO [IPC Server handler 6 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741838_1014{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|RBW]]} size 4912
2016-12-02 15:29:25,176 INFO [IPC Server handler 3 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 0
2016-12-02 15:29:25,176 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=18, memsize=1.6 K, hasBloomFilter=false, into tmp file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/a97d0569f97a4721b11f3fad4c90bab7
2016-12-02 15:29:25,186 INFO [IPC Server handler 0 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741840_1016{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 4802
2016-12-02 15:29:25,194 INFO [10.10.9.179,52887,1480721345911_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52887,1480721345911-MemstoreFlusherChore was stopped
2016-12-02 15:29:25,202 INFO [10.10.9.179,52893,1480721345952_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52893,1480721345952-MemstoreFlusherChore was stopped
2016-12-02 15:29:25,208 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases
2016-12-02 15:29:25,208 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases
2016-12-02 15:29:25,242 INFO [10.10.9.179,52887,1480721345911_splitLogManager__ChoreService_1] hbase.ScheduledChore(183): Chore: SplitLogManager Timeout Monitor was stopped
2016-12-02 15:29:25,563 INFO [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=9, memsize=130, hasBloomFilter=true, into tmp file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/default/testRegionMerge/cbf52d9c9c92e7bbb1af49b6d521d080/.tmp/38a15ff775904c6f8522679e24ca0244
2016-12-02 15:29:25,570 DEBUG [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/default/testRegionMerge/cbf52d9c9c92e7bbb1af49b6d521d080/.tmp/38a15ff775904c6f8522679e24ca0244 as hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/default/testRegionMerge/cbf52d9c9c92e7bbb1af49b6d521d080/f/38a15ff775904c6f8522679e24ca0244
2016-12-02 15:29:25,574 INFO [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.HStore(970): Added hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/default/testRegionMerge/cbf52d9c9c92e7bbb1af49b6d521d080/f/38a15ff775904c6f8522679e24ca0244, entries=5, sequenceid=9, filesize=4.8 K
2016-12-02 15:29:25,575 INFO [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.HRegion(2644): Finished memstore flush of ~130 B/130, currentsize=0 B/0 for region testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080. in 440ms, sequenceid=9, compaction requested=false
2016-12-02 15:29:25,579 INFO [StoreCloserThread-testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.-1] regionserver.HStore(874): Closed f
2016-12-02 15:29:25,579 INFO [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=6, memsize=78, hasBloomFilter=true, into tmp file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/namespace/09b1ecd6eda75f10b347b13abc2f2864/.tmp/bfe40008222944b4bda1e809b07c5185
2016-12-02 15:29:25,587 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=18, memsize=79, hasBloomFilter=false, into tmp file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/aede34cf46a3423ea4c901a064530d6f
2016-12-02 15:29:25,588 DEBUG [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/namespace/09b1ecd6eda75f10b347b13abc2f2864/.tmp/bfe40008222944b4bda1e809b07c5185 as hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/namespace/09b1ecd6eda75f10b347b13abc2f2864/info/bfe40008222944b4bda1e809b07c5185
2016-12-02 15:29:25,588 DEBUG [RS_CLOSE_REGION-10.10.9.179:52893-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/default/testRegionMerge/cbf52d9c9c92e7bbb1af49b6d521d080/recovered.edits/12.seqid to file, newSeqId=12, maxSeqId=2
2016-12-02 15:29:25,588 DEBUG [RS_CLOSE_REGION-10.10.9.179:52893-0] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter
2016-12-02 15:29:25,590 INFO [RS_CLOSE_REGION-10.10.9.179:52893-0] regionserver.HRegion(1643): Closed testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.
2016-12-02 15:29:25,590 DEBUG [RS_CLOSE_REGION-10.10.9.179:52893-0] handler.CloseRegionHandler(122): Closed testRegionMerge,,1480721351353.cbf52d9c9c92e7bbb1af49b6d521d080.
2016-12-02 15:29:25,593 INFO [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.HStore(970): Added hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/namespace/09b1ecd6eda75f10b347b13abc2f2864/info/bfe40008222944b4bda1e809b07c5185, entries=2, sequenceid=6, filesize=4.8 K
2016-12-02 15:29:25,594 INFO [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.HRegion(2644): Finished memstore flush of ~78 B/78, currentsize=0 B/0 for region hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864. in 432ms, sequenceid=6, compaction requested=false
2016-12-02 15:29:25,597 INFO [StoreCloserThread-hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.-1] regionserver.HStore(874): Closed info
2016-12-02 15:29:25,600 INFO [IPC Server handler 5 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741841_1017{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 4809
2016-12-02 15:29:25,600 DEBUG [RS_CLOSE_REGION-10.10.9.179:52887-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/namespace/09b1ecd6eda75f10b347b13abc2f2864/recovered.edits/9.seqid to file, newSeqId=9, maxSeqId=2
2016-12-02 15:29:25,602 INFO [RS_CLOSE_REGION-10.10.9.179:52887-0] regionserver.HRegion(1643): Closed hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.
2016-12-02 15:29:25,603 DEBUG [RS_CLOSE_REGION-10.10.9.179:52887-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1480721347666.09b1ecd6eda75f10b347b13abc2f2864.
2016-12-02 15:29:25,607 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 10 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721425607 org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279) at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) 2016-12-02 15:29:25,607 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279) at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379) at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279) at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418) at 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:25,674 INFO [10.10.9.179,52887,1480721345911_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52887,1480721345911-DoMetricsChore was stopped
2016-12-02 15:29:25,747 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52893,1480721345952; all regions closed.
2016-12-02 15:29:25,747 DEBUG [RS:0;10.10.9.179:52893] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52893,1480721345952
2016-12-02 15:29:25,753 INFO [IPC Server handler 3 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741834_1010{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|RBW]]} size 1774
2016-12-02 15:29:25,813 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(51): Creating new MetricsTableSourceImpl for table
2016-12-02 15:29:25,924 DEBUG [RpcServer idle connection scanner for port 52887] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52887: task running
2016-12-02 15:29:25,961 DEBUG [RpcServer idle connection scanner for port 52893] ipc.RpcServer$ConnectionManager$1(3195): RpcServer idle connection scanner for port 52893: task running
2016-12-02 15:29:26,006 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=18, memsize=86, hasBloomFilter=false, into tmp file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/2c6c2e417d3048e5baa60341d6b2bd74
2016-12-02 15:29:26,020 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893: callId: 9 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54451 deadline: 1480721426020
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52893,1480721345952 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:26,021 INFO [IPC Server handler 9 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-a21ac871-28b7-41f1-89ea-18b8ea95e060:NORMAL:127.0.0.1:52790|FINALIZED]]} size 0
2016-12-02 15:29:26,021 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52893,1480721345952 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52893,1480721345952 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:26,026 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=18, memsize=272, hasBloomFilter=false, into tmp file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/2df4e518e29945f6bfd3efd0cf2b6886
2016-12-02 15:29:26,029 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/a97d0569f97a4721b11f3fad4c90bab7 as hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/info/a97d0569f97a4721b11f3fad4c90bab7
2016-12-02 15:29:26,032 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HStore(970): Added hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/info/a97d0569f97a4721b11f3fad4c90bab7, entries=14, sequenceid=18, filesize=6.2 K
2016-12-02 15:29:26,032 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/aede34cf46a3423ea4c901a064530d6f as hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/rep_barrier/aede34cf46a3423ea4c901a064530d6f
2016-12-02 15:29:26,035 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HStore(970): Added hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/rep_barrier/aede34cf46a3423ea4c901a064530d6f, entries=1, sequenceid=18, filesize=4.7 K
2016-12-02 15:29:26,036 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/2c6c2e417d3048e5baa60341d6b2bd74 as hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/rep_meta/2c6c2e417d3048e5baa60341d6b2bd74
2016-12-02 15:29:26,040 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HStore(970): Added hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/rep_meta/2c6c2e417d3048e5baa60341d6b2bd74, entries=1, sequenceid=18, filesize=4.7 K
2016-12-02 15:29:26,041 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/.tmp/2df4e518e29945f6bfd3efd0cf2b6886 as hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/table/2df4e518e29945f6bfd3efd0cf2b6886
2016-12-02 15:29:26,044 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HStore(970): Added hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/table/2df4e518e29945f6bfd3efd0cf2b6886, entries=6, sequenceid=18, filesize=4.8 K
2016-12-02 15:29:26,045 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegion(2644): Finished memstore flush of ~1.99 KB/2037, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 878ms, sequenceid=18, compaction requested=false
2016-12-02 15:29:26,049 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed info
2016-12-02 15:29:26,051 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_barrier
2016-12-02 15:29:26,054 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_meta
2016-12-02 15:29:26,054 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_position
2016-12-02 15:29:26,056 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed table
2016-12-02 15:29:26,060 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52767/user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/data/hbase/meta/1588230740/recovered.edits/21.seqid to file, newSeqId=21, maxSeqId=3
2016-12-02 15:29:26,061 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-12-02 15:29:26,062 INFO [RS_CLOSE_META-10.10.9.179:52887-0] regionserver.HRegion(1643): Closed hbase:meta,,1.1588230740
2016-12-02 15:29:26,063 DEBUG [RS_CLOSE_META-10.10.9.179:52887-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-12-02 15:29:26,128 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52893: callId: 10 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54451 deadline: 1480721426128
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52893,1480721345952 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:26,129 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52893,1480721345952 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52893,1480721345952 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:26,163 DEBUG [RS:0;10.10.9.179:52893] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/oldWALs
2016-12-02 15:29:26,163 INFO [RS:0;10.10.9.179:52893] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52893%2C1480721345952:(num 1480721349337)
2016-12-02 15:29:26,163 DEBUG [RS:0;10.10.9.179:52893] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:26,163 INFO [RS:0;10.10.9.179:52893] regionserver.Leases(147): RS:0;10.10.9.179:52893 closing leases
2016-12-02 15:29:26,163 INFO [RS:0;10.10.9.179:52893] regionserver.Leases(150): RS:0;10.10.9.179:52893 closed leases
2016-12-02 15:29:26,164 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52887: disconnecting client 10.10.9.179:53037. Number of active connections: 3
2016-12-02 15:29:26,164 INFO [RS:0;10.10.9.179:52893] hbase.ChoreService(328): Chore service for: 10.10.9.179,52893,1480721345952 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52893,1480721345952 Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:26,164 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:26,164 INFO [RS:0;10.10.9.179:52893] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:26,164 INFO [RS:0;10.10.9.179:52893] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:26,165 INFO [RS:0;10.10.9.179:52893] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:26,165 INFO [RS:0;10.10.9.179:52893] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:26,165 INFO [RS:0;10.10.9.179:52893] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0046
2016-12-02 15:29:26,167 DEBUG [RS:0;10.10.9.179:52893] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:26,168 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52887] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52887: disconnecting client 10.10.9.179:54453. Number of active connections: 2
2016-12-02 15:29:26,170 INFO [RS:0;10.10.9.179:52893] ipc.RpcServer(2684): Stopping server on 52893
2016-12-02 15:29:26,170 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52893] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52893: disconnecting client 10.10.9.179:54454. Number of active connections: 1
2016-12-02 15:29:26,170 INFO [RpcServer.listener,port=52893] ipc.RpcServer$Listener(927): RpcServer.listener,port=52893: stopping
2016-12-02 15:29:26,171 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:26,171 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:26,171 DEBUG [RpcServer.listener,port=52893] ipc.RpcServer$ConnectionManager(3134): RpcServer.listener,port=52893: disconnecting client 10.10.9.179:54451. Number of active connections: 0
2016-12-02 15:29:26,172 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.10.9.179,52893,1480721345952
2016-12-02 15:29:26,172 DEBUG [RS:3;10.10.9.179:52464.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 1-0x158c1de825b0039, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-12-02 15:29:26,172 DEBUG [RS:2;10.10.9.179:52460.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 1-0x158c1de825b0036, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-12-02 15:29:26,172 INFO [RS:2;10.10.9.179:52460.replicationSource,1-EventThread] replication.HBaseReplicationEndpoint$PeerRegionServerListener(218): Detected change to peer region servers, fetching updated list
2016-12-02 15:29:26,172 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.10.9.179,52893,1480721345952
2016-12-02 15:29:26,172 INFO [RS:3;10.10.9.179:52464.replicationSource,1-EventThread] replication.HBaseReplicationEndpoint$PeerRegionServerListener(218): Detected change to peer region servers, fetching updated list
2016-12-02 15:29:26,172 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52893,1480721345952]
2016-12-02 15:29:26,172 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52893-0x158c1de825b003d, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-12-02 15:29:26,172 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52893,1480721345952; zookeeper connection closed.
2016-12-02 15:29:26,176 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52893,1480721345952 expired; onlineServers=1
2016-12-02 15:29:26,176 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-12-02 15:29:26,176 INFO [RS:0;10.10.9.179:52893] regionserver.HRegionServer(1166): RS:0;10.10.9.179:52893 exiting
2016-12-02 15:29:26,177 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6d4a23c7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6d4a23c7
2016-12-02 15:29:26,177 INFO [main] util.JVMClusterUtil(331): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-12-02 15:29:26,177 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52887,1480721345911; all regions closed.
2016-12-02 15:29:26,177 DEBUG [M:0;10.10.9.179:52887] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52887,1480721345911
2016-12-02 15:29:26,181 INFO [IPC Server handler 5 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 3830
2016-12-02 15:29:26,339 INFO [pool-288-thread-2] regionserver.ReplicationSinkManager(114): Current list of sinks is out of date or empty, updating
2016-12-02 15:29:26,341 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 11 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54357 deadline: 1480721426341
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:26,342 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:26,587 DEBUG [M:0;10.10.9.179:52887] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/oldWALs
2016-12-02 15:29:26,587 INFO [M:0;10.10.9.179:52887] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52887%2C1480721345911.meta:.meta(num 1480721347363)
2016-12-02 15:29:26,587 DEBUG [M:0;10.10.9.179:52887] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/WALs/10.10.9.179,52887,1480721345911
2016-12-02 15:29:26,592 INFO [IPC Server handler 1 on 52767] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52790 is added to blk_1073741833_1009{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9f8d1e89-5440-4f07-86c5-852a9e7ddddc:NORMAL:127.0.0.1:52790|RBW]]} size 1383
2016-12-02 15:29:26,647 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 12 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54357 deadline: 1480721426647
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:26,648 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:26,715 INFO [pool-283-thread-2] regionserver.ReplicationSinkManager(114): Current list of sinks is out of date or empty, updating
2016-12-02 15:29:26,716 DEBUG [RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887] ipc.CallRunner(127): RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=52887: callId: 11 service: AdminService methodName: ReplicateWALEntry size: 341 connection: 10.10.9.179:54354 deadline: 1480721426716
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
2016-12-02 15:29:26,716 WARN [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:273)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:260)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:72)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:379)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException): org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 10.10.9.179,52887,1480721345911 stopping
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1279)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2027)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26515)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2584)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:121)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:159)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:189)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    ... 1 more
2016-12-02 15:29:27,000 DEBUG [M:0;10.10.9.179:52887] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/514d2b0c-7375-4a94-9b4c-5cfc4117c547/oldWALs
2016-12-02 15:29:27,000 INFO [M:0;10.10.9.179:52887] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52887%2C1480721345911:(num 1480721348179)
2016-12-02 15:29:27,001 DEBUG [M:0;10.10.9.179:52887] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,001 INFO [M:0;10.10.9.179:52887] regionserver.Leases(147): M:0;10.10.9.179:52887 closing leases
2016-12-02 15:29:27,001 INFO [M:0;10.10.9.179:52887] regionserver.Leases(150): M:0;10.10.9.179:52887 closed leases
2016-12-02 15:29:27,001 INFO [M:0;10.10.9.179:52887] hbase.ChoreService(328): Chore service for: 10.10.9.179,52887,1480721345911 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52887,1480721345911 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52887,1480721345911-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: ReplicationMetaCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52887,1480721345911-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52887,1480721345911-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52887,1480721345911-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: 10.10.9.179,52887,1480721345911-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CatalogJanitor-10.10.9.179:52887 Period: 300000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,004 INFO [master//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,004 INFO [M:0;10.10.9.179:52887] master.MasterMobCompactionThread(174): Waiting for Mob Compaction Thread to finish...
2016-12-02 15:29:27,004 INFO [M:0;10.10.9.179:52887] master.MasterMobCompactionThread(174): Waiting for Region Server Mob Compaction Thread to finish...
2016-12-02 15:29:27,004 DEBUG [M:0;10.10.9.179:52887] master.HMaster(1026): Stopping service threads
2016-12-02 15:29:27,009 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/master
2016-12-02 15:29:27,009 INFO [M:0;10.10.9.179:52887] hbase.ChoreService(328): Chore service for: 10.10.9.179,52887,1480721345911_splitLogManager_ had [] on shutdown
2016-12-02 15:29:27,009 INFO [M:0;10.10.9.179:52887] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-12-02 15:29:27,009 INFO [M:0;10.10.9.179:52887] ipc.RpcServer(2684): Stopping server on 52887
2016-12-02 15:29:27,009 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Set watcher on znode that does not yet exist, /2/master
2016-12-02 15:29:27,009 INFO [RpcServer.listener,port=52887] ipc.RpcServer$Listener(927): RpcServer.listener,port=52887: stopping
2016-12-02 15:29:27,009 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,009 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,009 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3134): RpcServer.listener,port=52887: disconnecting client 10.10.9.179:54357. Number of active connections: 1
2016-12-02 15:29:27,010 DEBUG [RpcServer.listener,port=52887] ipc.RpcServer$ConnectionManager(3134): RpcServer.listener,port=52887: disconnecting client 10.10.9.179:54354. Number of active connections: 0
2016-12-02 15:29:27,010 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52887-0x158c1de825b003c, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/10.10.9.179,52887,1480721345911
2016-12-02 15:29:27,010 DEBUG [RS:3;10.10.9.179:52464.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 1-0x158c1de825b0039, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-12-02 15:29:27,010 INFO [RS:3;10.10.9.179:52464.replicationSource,1-EventThread] replication.HBaseReplicationEndpoint$PeerRegionServerListener(218): Detected change to peer region servers, fetching updated list
2016-12-02 15:29:27,010 DEBUG [RS:2;10.10.9.179:52460.replicationSource,1-EventThread] zookeeper.ZooKeeperWatcher(466): connection to cluster: 1-0x158c1de825b0036, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs
2016-12-02 15:29:27,010 INFO [RS:2;10.10.9.179:52460.replicationSource,1-EventThread] replication.HBaseReplicationEndpoint$PeerRegionServerListener(218): Detected change to peer region servers, fetching updated list
2016-12-02 15:29:27,010 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52887,1480721345911]
2016-12-02 15:29:27,010 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52887,1480721345911; zookeeper connection closed.
2016-12-02 15:29:27,011 INFO [M:0;10.10.9.179:52887] regionserver.HRegionServer(1166): M:0;10.10.9.179:52887 exiting
2016-12-02 15:29:27,011 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:27,023 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:27,056 INFO [pool-288-thread-4] regionserver.ReplicationSinkManager(114): Current list of sinks is out of date or empty, updating
2016-12-02 15:29:27,057 WARN [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate because of a local or network error:
java.io.IOException: No replication sinks are available
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSinkManager.getReplicationSink(ReplicationSinkManager.java:119)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:377)
    at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:364)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2016-12-02 15:29:27,129 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/dfscluster_2ad9746f-3cc9-4584-80af-0ebe43f401db/dfs/data/data1/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/dfscluster_2ad9746f-3cc9-4584-80af-0ebe43f401db/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:52767] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1095361287-10.10.9.179-1480721344603 (Datanode Uuid ab4dfd0f-70a9-4b5c-afb6-11b4a7964d13) service to localhost/127.0.0.1:52767 interrupted
2016-12-02 15:29:27,129 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/dfscluster_2ad9746f-3cc9-4584-80af-0ebe43f401db/dfs/data/data1/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/9e6cf885-4d11-481d-b3e2-b111790ca304/dfscluster_2ad9746f-3cc9-4584-80af-0ebe43f401db/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:52767] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1095361287-10.10.9.179-1480721344603 (Datanode Uuid ab4dfd0f-70a9-4b5c-afb6-11b4a7964d13) service to localhost/127.0.0.1:52767
2016-12-02 15:29:27,162 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:27,179 INFO [main] hbase.HBaseTestingUtility(1175): Minicluster is down
2016-12-02 15:29:27,179 INFO [main] hbase.HBaseTestingUtility(1162): Shutting down minicluster
2016-12-02 15:29:27,179 INFO [main] client.ConnectionImplementation(1652): Closing master protocol: MasterService
2016-12-02 15:29:27,179 INFO [main] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b003b
2016-12-02 15:29:27,179 DEBUG [main] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,180 DEBUG [main] util.JVMClusterUtil(246): Shutting down HBase Cluster
2016-12-02 15:29:27,180 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52448,1480721340079' *****
2016-12-02 15:29:27,180 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:53288. Number of active connections: 15
2016-12-02 15:29:27,180 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52473: disconnecting client 10.10.9.179:54301. Number of active connections: 1
2016-12-02 15:29:27,180 INFO [main] regionserver.HRegionServer(1961): STOPPED: Cluster shutdown requested
2016-12-02 15:29:27,180 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52460] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52460: disconnecting client 10.10.9.179:54343. Number of active connections: 1
2016-12-02 15:29:27,180 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52464] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52464: disconnecting client 10.10.9.179:54342. Number of active connections: 1
2016-12-02 15:29:27,180 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52473: disconnecting client 10.10.9.179:54200. Number of active connections: 2
2016-12-02 15:29:27,180 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52744. Number of active connections: 15
2016-12-02 15:29:27,182 INFO [M:0;10.10.9.179:52448] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-12-02 15:29:27,182 INFO [10.10.9.179,52448,1480721340079_ChoreService_1] hbase.ScheduledChore(183): Chore: CompactionChecker was stopped
2016-12-02 15:29:27,183 INFO [SplitLogWorker-10.10.9.179:52448] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-12-02 15:29:27,183 INFO [SplitLogWorker-10.10.9.179:52448] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52448,1480721340079 exiting
2016-12-02 15:29:27,183 INFO [M:0;10.10.9.179:52448] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore.
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 INFO [10.10.9.179,52448,1480721340079_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52448,1480721340079-MemstoreFlusherChore was stopped
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52450,1480721340274' *****
2016-12-02 15:29:27,183 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 INFO [M:0;10.10.9.179:52448] procedure2.ProcedureExecutor(544): Stopping the procedure executor
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,186 DEBUG [ProcedureExecutorWorker-1] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-1,5,ProcedureExecutor]
2016-12-02 15:29:27,189 DEBUG [ProcedureExecutorWorker-6] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-6,5,ProcedureExecutor]
2016-12-02 15:29:27,183 INFO [RS:0;10.10.9.179:52450] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,183 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52454,1480721340310' *****
2016-12-02 15:29:27,191 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested
2016-12-02 15:29:27,183 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting
2016-12-02 15:29:27,183 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,183 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running
2016-12-02 15:29:27,191 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,192 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,194 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,191 INFO [RS:1;10.10.9.179:52454] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-12-02 15:29:27,191 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52460,1480721340350' *****
2016-12-02 15:29:27,197 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested
2016-12-02 15:29:27,189 INFO [SplitLogWorker-10.10.9.179:52450] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-12-02 15:29:27,197 INFO [SplitLogWorker-10.10.9.179:52450] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52450,1480721340274 exiting
2016-12-02 15:29:27,189 INFO [RS:0;10.10.9.179:52450] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore.
2016-12-02 15:29:27,189 DEBUG [ProcedureExecutorWorker-5] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-5,5,ProcedureExecutor]
2016-12-02 15:29:27,197 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting
2016-12-02 15:29:27,189 DEBUG [ProcedureExecutorWorker-4] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-4,5,ProcedureExecutor]
2016-12-02 15:29:27,189 DEBUG [ProcedureExecutorWorker-3] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-3,5,ProcedureExecutor]
2016-12-02 15:29:27,186 DEBUG [ProcedureExecutorWorker-2] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-2,5,ProcedureExecutor]
2016-12-02 15:29:27,186 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,186 DEBUG [ProcedureExecutorWorker-8] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-8,5,ProcedureExecutor]
2016-12-02 15:29:27,184 DEBUG [ProcedureExecutorWorker-7] procedure2.ProcedureExecutor$WorkerThread(1425): worker thread terminated Thread[ProcedureExecutorWorker-7,5,ProcedureExecutor]
2016-12-02 15:29:27,184 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,184 INFO [M:0;10.10.9.179:52448] wal.WALProcedureStore(235): Stopping the WAL Procedure Store, isAbort=false
2016-12-02 15:29:27,202 INFO [10.10.9.179,52450,1480721340274_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52450,1480721340274-MemstoreFlusherChore was stopped
2016-12-02 15:29:27,184 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running
2016-12-02 15:29:27,197 INFO [RS:0;10.10.9.179:52450] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully.
2016-12-02 15:29:27,204 INFO [RS:0;10.10.9.179:52450] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully.
2016-12-02 15:29:27,204 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,204 DEBUG [RS:0;10.10.9.179:52450] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator
2016-12-02 15:29:27,204 INFO [RS:0;10.10.9.179:52450] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0013
2016-12-02 15:29:27,197 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting
2016-12-02 15:29:27,197 INFO [RS:2;10.10.9.179:52460] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread
2016-12-02 15:29:27,197 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52464,1480721340388' *****
2016-12-02 15:29:27,204 INFO [SplitLogWorker-10.10.9.179:52460] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-12-02 15:29:27,197 INFO [SplitLogWorker-10.10.9.179:52454] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting.
2016-12-02 15:29:27,196 INFO [RS:1;10.10.9.179:52454] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,196 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2016-12-02 15:29:27,191 DEBUG [main-EventThread] zookeeper.ZKUtil(365): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/running 2016-12-02 15:29:27,205 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,205 INFO [RS:1;10.10.9.179:52454] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,205 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,205 DEBUG [RS:0;10.10.9.179:52450] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,205 INFO [SplitLogWorker-10.10.9.179:52454] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52454,1480721340310 exiting 2016-12-02 15:29:27,205 INFO [SplitLogWorker-10.10.9.179:52460] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52460,1480721340350 exiting 2016-12-02 15:29:27,204 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested 2016-12-02 15:29:27,204 INFO [RS:2;10.10.9.179:52460] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,211 INFO [RS:3;10.10.9.179:52464] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-12-02 15:29:27,211 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52467,1480721340421' ***** 2016-12-02 15:29:27,211 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52450,1480721340274; all regions closed. 2016-12-02 15:29:27,210 INFO [RS:1;10.10.9.179:52454] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-12-02 15:29:27,211 DEBUG [RS:0;10.10.9.179:52450] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52450,1480721340274 2016-12-02 15:29:27,211 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested 2016-12-02 15:29:27,211 INFO [SplitLogWorker-10.10.9.179:52464] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-12-02 15:29:27,211 INFO [RS:3;10.10.9.179:52464] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,211 INFO [RS:2;10.10.9.179:52460] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,212 INFO [RS:2;10.10.9.179:52460] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 
2016-12-02 15:29:27,211 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,211 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,213 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,213 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52460,1480721340350 2016-12-02 15:29:27,213 DEBUG [RS:2;10.10.9.179:52460] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,213 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,213 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases 2016-12-02 15:29:27,213 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases 2016-12-02 15:29:27,216 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases 2016-12-02 15:29:27,213 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases 2016-12-02 15:29:27,213 INFO [master//10.10.9.179:0.leaseChecker] regionserver.Leases(147): master//10.10.9.179:0.leaseChecker closing leases 2016-12-02 15:29:27,213 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases 2016-12-02 15:29:27,216 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases 2016-12-02 15:29:27,213 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,212 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,212 INFO [RS:3;10.10.9.179:52464] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,211 INFO [SplitLogWorker-10.10.9.179:52464] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52464,1480721340388 exiting 2016-12-02 15:29:27,211 INFO [RS:4;10.10.9.179:52467] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-12-02 15:29:27,218 INFO [RS:4;10.10.9.179:52467] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,211 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52473,1480721340476' ***** 2016-12-02 15:29:27,221 INFO [RS:4;10.10.9.179:52467] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,211 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52454,1480721340310 2016-12-02 15:29:27,221 DEBUG [RS:1;10.10.9.179:52454] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,221 INFO [RS:4;10.10.9.179:52467] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-12-02 15:29:27,221 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested 2016-12-02 15:29:27,221 INFO [SplitLogWorker-10.10.9.179:52467] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. 
Exiting. 2016-12-02 15:29:27,218 INFO [RS:3;10.10.9.179:52464] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-12-02 15:29:27,216 INFO [master//10.10.9.179:0.leaseChecker] regionserver.Leases(150): master//10.10.9.179:0.leaseChecker closed leases 2016-12-02 15:29:27,216 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases 2016-12-02 15:29:27,216 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases 2016-12-02 15:29:27,223 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52464,1480721340388 2016-12-02 15:29:27,223 DEBUG [RS:3;10.10.9.179:52464] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,223 INFO [RS:3;10.10.9.179:52464] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0019 2016-12-02 15:29:27,215 INFO [RS:2;10.10.9.179:52460] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0017 2016-12-02 15:29:27,213 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW]]} size 508 2016-12-02 15:29:27,223 INFO [10.10.9.179,52473,1480721340476_ChoreService_1] hbase.ScheduledChore(183): Chore: CompactionChecker was stopped 2016-12-02 15:29:27,224 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW]]} size 508 2016-12-02 15:29:27,224 DEBUG [RS:3;10.10.9.179:52464] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,224 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1320): Waiting on 1 regions to close 2016-12-02 15:29:27,224 DEBUG [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1324): {0115985df04bcc343330799dd037ce66=testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.} 2016-12-02 15:29:27,223 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,223 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-1] handler.CloseRegionHandler(90): Processing close of testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. 
2016-12-02 15:29:27,221 INFO [SplitLogWorker-10.10.9.179:52467] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52467,1480721340421 exiting 2016-12-02 15:29:27,221 INFO [RS:5;10.10.9.179:52473] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-12-02 15:29:27,221 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52476,1480721340506' ***** 2016-12-02 15:29:27,221 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52467,1480721340421 2016-12-02 15:29:27,224 DEBUG [RS:4;10.10.9.179:52467] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,224 INFO [RS:4;10.10.9.179:52467] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0012 2016-12-02 15:29:27,221 INFO [RS:1;10.10.9.179:52454] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0011 2016-12-02 15:29:27,224 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested 2016-12-02 15:29:27,224 INFO [SplitLogWorker-10.10.9.179:52473] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-12-02 15:29:27,224 INFO [RS:5;10.10.9.179:52473] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,225 DEBUG [RS:4;10.10.9.179:52467] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,225 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52467,1480721340421; all regions closed. 2016-12-02 15:29:27,225 DEBUG [RS:1;10.10.9.179:52454] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,225 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52454,1480721340310; all regions closed. 2016-12-02 15:29:27,224 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.HRegion(1486): Closing testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.: disabling compactions & flushes 2016-12-02 15:29:27,224 DEBUG [RS:2;10.10.9.179:52460] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,224 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW]]} size 83 2016-12-02 15:29:27,224 INFO [10.10.9.179,52473,1480721340476_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52473,1480721340476-MemstoreFlusherChore was stopped 2016-12-02 15:29:27,226 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:54420. Number of active connections: 14 2016-12-02 15:29:27,225 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52460,1480721340350; all regions closed. 2016-12-02 15:29:27,225 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.HRegion(1525): Updates disabled for region testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. 
2016-12-02 15:29:27,225 DEBUG [RS:1;10.10.9.179:52454] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52454,1480721340310 2016-12-02 15:29:27,225 DEBUG [RS:4;10.10.9.179:52467] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52467,1480721340421 2016-12-02 15:29:27,225 INFO [RS:5;10.10.9.179:52473] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,225 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,225 INFO [SplitLogWorker-10.10.9.179:52473] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52473,1480721340476 exiting 2016-12-02 15:29:27,225 INFO [RS:6;10.10.9.179:52476] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-12-02 15:29:27,225 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52479,1480721340539' ***** 2016-12-02 15:29:27,226 INFO [SplitLogWorker-10.10.9.179:52476] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-12-02 15:29:27,226 INFO [RS:6;10.10.9.179:52476] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,226 INFO [RS:5;10.10.9.179:52473] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-12-02 15:29:27,227 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52473,1480721340476 2016-12-02 15:29:27,227 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,226 INFO [RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.HRegion(2442): Flushing 1/1 column families, memstore=234 B 2016-12-02 15:29:27,226 DEBUG [RS:2;10.10.9.179:52460] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52460,1480721340350 2016-12-02 15:29:27,226 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW]]} size 83 2016-12-02 15:29:27,227 DEBUG [RS:5;10.10.9.179:52473] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,231 INFO [RS:5;10.10.9.179:52473] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0018 2016-12-02 15:29:27,227 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,227 INFO [RS:6;10.10.9.179:52476] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,233 INFO [RS:6;10.10.9.179:52476] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 
2016-12-02 15:29:27,233 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52476,1480721340506 2016-12-02 15:29:27,233 DEBUG [RS:6;10.10.9.179:52476] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,233 INFO [RS:6;10.10.9.179:52476] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b000f 2016-12-02 15:29:27,226 INFO [SplitLogWorker-10.10.9.179:52476] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52476,1480721340506 exiting 2016-12-02 15:29:27,226 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested 2016-12-02 15:29:27,234 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52482,1480721340569' ***** 2016-12-02 15:29:27,234 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested 2016-12-02 15:29:27,234 INFO [main] regionserver.HRegionServer(1955): ***** STOPPING region server '10.10.9.179,52485,1480721340604' ***** 2016-12-02 15:29:27,234 INFO [main] regionserver.HRegionServer(1961): STOPPED: Shutdown requested 2016-12-02 15:29:27,234 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW]]} size 83 2016-12-02 15:29:27,234 INFO [M:0;10.10.9.179:52448] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,234 INFO [M:0;10.10.9.179:52448] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-12-02 15:29:27,233 INFO [10.10.9.179,52448,1480721340079_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52448,1480721340079-DoMetricsChore was stopped 2016-12-02 15:29:27,236 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741830_1006 size 15897 2016-12-02 15:29:27,234 INFO [RS:9;10.10.9.179:52485] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-12-02 15:29:27,234 DEBUG [RS:6;10.10.9.179:52476] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,234 INFO [RS:8;10.10.9.179:52482] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-12-02 15:29:27,234 DEBUG [RS:5;10.10.9.179:52473] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,234 INFO [RS:7;10.10.9.179:52479] regionserver.SplitLogWorker(164): Sending interrupt to stop the worker thread 2016-12-02 15:29:27,239 DEBUG [RS_CLOSE_REGION-10.10.9.179:52448-0] handler.CloseRegionHandler(90): Processing close of hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 2016-12-02 15:29:27,239 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52473,1480721340476; all regions closed. 2016-12-02 15:29:27,238 INFO [SplitLogWorker-10.10.9.179:52482] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 
2016-12-02 15:29:27,241 INFO [SplitLogWorker-10.10.9.179:52482] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52482,1480721340569 exiting 2016-12-02 15:29:27,238 INFO [RS:8;10.10.9.179:52482] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,241 INFO [RS:8;10.10.9.179:52482] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,241 INFO [RS:8;10.10.9.179:52482] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-12-02 15:29:27,241 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52482,1480721340569 2016-12-02 15:29:27,241 DEBUG [RS:8;10.10.9.179:52482] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,241 INFO [RS:8;10.10.9.179:52482] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0014 2016-12-02 15:29:27,238 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|RBW]]} size 83 2016-12-02 15:29:27,238 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52476,1480721340506; all regions closed. 2016-12-02 15:29:27,238 INFO [SplitLogWorker-10.10.9.179:52485] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-12-02 15:29:27,243 INFO [SplitLogWorker-10.10.9.179:52485] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52485,1480721340604 exiting 2016-12-02 15:29:27,238 INFO [RS:9;10.10.9.179:52485] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,243 INFO [RS:9;10.10.9.179:52485] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,243 INFO [RS:9;10.10.9.179:52485] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 
2016-12-02 15:29:27,243 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52485,1480721340604 2016-12-02 15:29:27,243 DEBUG [RS:9;10.10.9.179:52485] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,243 INFO [RS:9;10.10.9.179:52485] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0016 2016-12-02 15:29:27,236 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52448,1480721340079 2016-12-02 15:29:27,244 DEBUG [M:0;10.10.9.179:52448] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,244 INFO [M:0;10.10.9.179:52448] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0015 2016-12-02 15:29:27,243 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,243 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,243 DEBUG [RS:8;10.10.9.179:52482] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,244 DEBUG [RS:9;10.10.9.179:52485] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,244 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52485,1480721340604; all regions closed. 2016-12-02 15:29:27,243 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|RBW]]} size 83 2016-12-02 15:29:27,244 DEBUG [M:0;10.10.9.179:52448] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,244 INFO [M:0;10.10.9.179:52448] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish... 2016-12-02 15:29:27,244 INFO [M:0;10.10.9.179:52448] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish... 2016-12-02 15:29:27,244 INFO [M:0;10.10.9.179:52448] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish... 2016-12-02 15:29:27,244 INFO [M:0;10.10.9.179:52448] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish... 
2016-12-02 15:29:27,243 DEBUG [RS:6;10.10.9.179:52476] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52476,1480721340506 2016-12-02 15:29:27,241 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,241 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,241 DEBUG [RS:5;10.10.9.179:52473] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52473,1480721340476 2016-12-02 15:29:27,240 DEBUG [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.HRegion(1486): Closing hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02.: disabling compactions & flushes 2016-12-02 15:29:27,246 DEBUG [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.HRegion(1525): Updates disabled for region hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. 2016-12-02 15:29:27,246 INFO [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.HRegion(2442): Flushing 1/1 column families, memstore=78 B 2016-12-02 15:29:27,239 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:54253. Number of active connections: 13 2016-12-02 15:29:27,239 INFO [SplitLogWorker-10.10.9.179:52479] regionserver.SplitLogWorker(146): SplitLogWorker interrupted. Exiting. 2016-12-02 15:29:27,247 INFO [SplitLogWorker-10.10.9.179:52479] regionserver.SplitLogWorker(155): SplitLogWorker 10.10.9.179,52479,1480721340539 exiting 2016-12-02 15:29:27,239 INFO [RS:7;10.10.9.179:52479] regionserver.HeapMemoryManager(209): Stopping HeapMemoryTuner chore. 2016-12-02 15:29:27,246 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] handler.CloseRegionHandler(90): Processing close of hbase:meta,,1.1588230740 2016-12-02 15:29:27,247 INFO [RS:7;10.10.9.179:52479] flush.RegionServerFlushTableProcedureManager(115): Stopping region server flush procedure manager gracefully. 2016-12-02 15:29:27,247 INFO [RS:7;10.10.9.179:52479] snapshot.RegionServerSnapshotManager(136): Stopping RegionServerSnapshotManager gracefully. 2016-12-02 15:29:27,246 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|RBW]]} size 83 2016-12-02 15:29:27,246 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(1320): Waiting on 2 regions to close 2016-12-02 15:29:27,251 DEBUG [M:0;10.10.9.179:52448] regionserver.HRegionServer(1324): {5450cdacaee02275eb0f7d3bc71c5f02=hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02., 1588230740=hbase:meta,,1.1588230740} 2016-12-02 15:29:27,245 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52473] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52473: disconnecting client 10.10.9.179:53373. 
Number of active connections: 0 2016-12-02 15:29:27,245 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52460] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52460: disconnecting client 10.10.9.179:54322. Number of active connections: 0 2016-12-02 15:29:27,245 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52464] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52464: disconnecting client 10.10.9.179:54321. Number of active connections: 0 2016-12-02 15:29:27,253 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases 2016-12-02 15:29:27,253 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases 2016-12-02 15:29:27,244 DEBUG [RS:9;10.10.9.179:52485] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52485,1480721340604 2016-12-02 15:29:27,244 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52482,1480721340569; all regions closed. 2016-12-02 15:29:27,251 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|RBW]]} size 83 2016-12-02 15:29:27,249 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1091): stopping server 10.10.9.179,52479,1480721340539 2016-12-02 15:29:27,253 DEBUG [RS:7;10.10.9.179:52479] zookeeper.MetaTableLocator(618): Stopping MetaTableLocator 2016-12-02 15:29:27,254 INFO [RS:7;10.10.9.179:52479] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0010 2016-12-02 15:29:27,249 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegion(1486): Closing hbase:meta,,1.1588230740: disabling compactions & flushes 2016-12-02 15:29:27,254 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegion(1525): Updates disabled for region hbase:meta,,1.1588230740 2016-12-02 15:29:27,247 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.1 exiting 2016-12-02 15:29:27,255 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegion(2442): Flushing 5/5 column families, memstore=9.64 KB 2016-12-02 15:29:27,247 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(293): MemStoreFlusher.0 exiting 2016-12-02 15:29:27,257 DEBUG [RS:7;10.10.9.179:52479] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,258 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52479,1480721340539; all regions closed. 
2016-12-02 15:29:27,258 DEBUG [RS:7;10.10.9.179:52479] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52479,1480721340539 2016-12-02 15:29:27,253 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-edf5a725-c66c-4d3c-82b5-95b8b2671c7a:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|RBW]]} size 83 2016-12-02 15:29:27,253 DEBUG [RS:8;10.10.9.179:52482] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:27,260 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW]]} size 83 2016-12-02 15:29:27,261 INFO [10.10.9.179,52454,1480721340310_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52454,1480721340310-MemstoreFlusherChore was stopped 2016-12-02 15:29:27,261 INFO [10.10.9.179,52460,1480721340350_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52460,1480721340350-MemstoreFlusherChore was stopped 2016-12-02 15:29:27,261 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741837_1013 size 91 2016-12-02 15:29:27,265 INFO [10.10.9.179,52476,1480721340506_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52476,1480721340506-MemstoreFlusherChore was stopped 2016-12-02 15:29:27,265 INFO [10.10.9.179,52464,1480721340388_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52464,1480721340388-MemstoreFlusherChore was stopped 2016-12-02 15:29:27,266 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|RBW]]} size 83 2016-12-02 15:29:27,266 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 83 2016-12-02 15:29:27,267 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is 
added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW]]} size 83 2016-12-02 15:29:27,267 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741842_1018 size 2259 2016-12-02 15:29:27,267 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW]]} size 83 2016-12-02 15:29:27,269 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW]]} size 83 2016-12-02 15:29:27,271 DEBUG [RS:0;10.10.9.179:52450] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:27,271 INFO [RS:0;10.10.9.179:52450] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52450%2C1480721340274:(num 1480721342834) 2016-12-02 15:29:27,273 DEBUG [RS:0;10.10.9.179:52450] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,273 INFO [RS:0;10.10.9.179:52450] regionserver.Leases(147): RS:0;10.10.9.179:52450 closing leases 2016-12-02 15:29:27,273 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 83 2016-12-02 15:29:27,273 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52510. 
Number of active connections: 12 2016-12-02 15:29:27,273 INFO [RS:0;10.10.9.179:52450] regionserver.Leases(150): RS:0;10.10.9.179:52450 closed leases 2016-12-02 15:29:27,274 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741838_1014 size 91 2016-12-02 15:29:27,274 INFO [RS:0;10.10.9.179:52450] hbase.ChoreService(328): Chore service for: 10.10.9.179,52450,1480721340274 had [[ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52450,1480721340274 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown 2016-12-02 15:29:27,276 INFO [RS:0;10.10.9.179:52450] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish... 2016-12-02 15:29:27,276 INFO [RS:0;10.10.9.179:52450] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish... 2016-12-02 15:29:27,276 INFO [RS:0;10.10.9.179:52450] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish... 2016-12-02 15:29:27,276 INFO [RS:0;10.10.9.179:52450] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish... 2016-12-02 15:29:27,276 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting. 2016-12-02 15:29:27,276 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW]]} size 83 2016-12-02 15:29:27,276 INFO [10.10.9.179,52467,1480721340421_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52467,1480721340421-MemstoreFlusherChore was stopped 2016-12-02 15:29:27,276 INFO [RS:0;10.10.9.179:52450] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing 2016-12-02 15:29:27,276 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW]]} size 83 2016-12-02 15:29:27,277 INFO [RS:0;10.10.9.179:52450] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0031 2016-12-02 15:29:27,279 DEBUG [RS:0;10.10.9.179:52450] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,279 INFO [RS:0;10.10.9.179:52450] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:0;10.10.9.179:52450.replicationSource.10.10.9.179%2C52450%2C1480721340274,1 terminated 2016-12-02 15:29:27,280 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741843_1019 size 642 2016-12-02 15:29:27,281 INFO [RS:0;10.10.9.179:52450] ipc.RpcServer(2684): Stopping server on 
52450 2016-12-02 15:29:27,281 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW]]} size 83 2016-12-02 15:29:27,281 INFO [RpcServer.listener,port=52450] ipc.RpcServer$Listener(927): RpcServer.listener,port=52450: stopping 2016-12-02 15:29:27,281 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW]]} size 83 2016-12-02 15:29:27,281 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741834_1010 size 91 2016-12-02 15:29:27,282 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped 2016-12-02 15:29:27,282 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping 2016-12-02 15:29:27,283 DEBUG [RS:1;10.10.9.179:52454] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:27,283 INFO [RS:1;10.10.9.179:52454] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52454%2C1480721340310:(num 1480721342835) 2016-12-02 15:29:27,283 DEBUG [RS:1;10.10.9.179:52454] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,283 INFO [RS:1;10.10.9.179:52454] regionserver.Leases(147): RS:1;10.10.9.179:52454 closing leases 2016-12-02 15:29:27,283 INFO [RS:1;10.10.9.179:52454] regionserver.Leases(150): RS:1;10.10.9.179:52454 closed leases 2016-12-02 15:29:27,283 DEBUG [RS:4;10.10.9.179:52467] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs 2016-12-02 15:29:27,283 INFO [RS:4;10.10.9.179:52467] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52467%2C1480721340421:(num 1480721342834) 2016-12-02 15:29:27,283 DEBUG [RS:4;10.10.9.179:52467] ipc.AbstractRpcClient(478): Stopping rpc client 2016-12-02 15:29:27,283 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW]]} size 83 2016-12-02 15:29:27,283 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52511. 
Number of active connections: 11 2016-12-02 15:29:27,285 INFO [RS:4;10.10.9.179:52467] regionserver.Leases(147): RS:4;10.10.9.179:52467 closing leases 2016-12-02 15:29:27,285 INFO [RS:4;10.10.9.179:52467] regionserver.Leases(150): RS:4;10.10.9.179:52467 closed leases 2016-12-02 15:29:27,285 INFO [RS:1;10.10.9.179:52454] hbase.ChoreService(328): Chore service for: 10.10.9.179,52454,1480721340310 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52454,1480721340310 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown 2016-12-02 15:29:27,285 INFO [RS:1;10.10.9.179:52454] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish... 2016-12-02 15:29:27,287 INFO [RS:1;10.10.9.179:52454] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish... 2016-12-02 15:29:27,287 INFO [10.10.9.179,52485,1480721340604_ChoreService_1] hbase.ScheduledChore(183): Chore: 10.10.9.179,52485,1480721340604-MemstoreFlusherChore was stopped 2016-12-02 15:29:27,287 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW], ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|RBW]]} size 83 2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274 2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274 2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52450-0x158c1de825b0005, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2016-12-02 15:29:27,287 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741840_1016 size 91 2016-12-02 15:29:27,287 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52507. Number of active connections: 10 2016-12-02 15:29:27,287 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting. 
2016-12-02 15:29:27,287 INFO [RS:4;10.10.9.179:52467] hbase.ChoreService(328): Chore service for: 10.10.9.179,52467,1480721340421 had [[ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52467,1480721340421 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,287 INFO [RS:4;10.10.9.179:52467] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,288 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,288 DEBUG [RS:2;10.10.9.179:52460] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,288 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,288 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52450,1480721340274]
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,288 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,290 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,287 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52450,1480721340274
2016-12-02 15:29:27,291 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,287 INFO [RS:1;10.10.9.179:52454] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,291 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,290 DEBUG [RS:6;10.10.9.179:52476] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,291 INFO [RS:6;10.10.9.179:52476] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52476%2C1480721340506:(num 1480721342835)
2016-12-02 15:29:27,291 DEBUG [RS:6;10.10.9.179:52476] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,290 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,290 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52450,1480721340274 expired; onlineServers=10
2016-12-02 15:29:27,291 DEBUG [RS:9;10.10.9.179:52485] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,291 INFO [RS:9;10.10.9.179:52485] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52485%2C1480721340604:(num 1480721342818)
2016-12-02 15:29:27,291 DEBUG [RS:9;10.10.9.179:52485] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,293 INFO [RS:9;10.10.9.179:52485] regionserver.Leases(147): RS:9;10.10.9.179:52485 closing leases
2016-12-02 15:29:27,293 INFO [RS:9;10.10.9.179:52485] regionserver.Leases(150): RS:9;10.10.9.179:52485 closed leases
2016-12-02 15:29:27,288 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52450,1480721340274; zookeeper connection closed.
2016-12-02 15:29:27,293 INFO [RS:0;10.10.9.179:52450] regionserver.HRegionServer(1166): RS:0;10.10.9.179:52450 exiting
2016-12-02 15:29:27,288 INFO [RS:2;10.10.9.179:52460] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52460%2C1480721340350:(num 1480721342836)
2016-12-02 15:29:27,293 DEBUG [RS:2;10.10.9.179:52460] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,293 INFO [RS:9;10.10.9.179:52485] hbase.ChoreService(328): Chore service for: 10.10.9.179,52485,1480721340604 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52485,1480721340604 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,288 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,288 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,288 INFO [RS:4;10.10.9.179:52467] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,295 INFO [RS:4;10.10.9.179:52467] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,295 INFO [RS:4;10.10.9.179:52467] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,295 DEBUG [RS:8;10.10.9.179:52482] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,295 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@72956c4e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@72956c4e
2016-12-02 15:29:27,293 INFO [RS:2;10.10.9.179:52460] regionserver.Leases(147): RS:2;10.10.9.179:52460 closing leases
2016-12-02 15:29:27,296 INFO [RS:2;10.10.9.179:52460] regionserver.Leases(150): RS:2;10.10.9.179:52460 closed leases
2016-12-02 15:29:27,293 DEBUG [RS:7;10.10.9.179:52479] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,293 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52513. Number of active connections: 8
2016-12-02 15:29:27,291 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52505. Number of active connections: 9
2016-12-02 15:29:27,291 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs
2016-12-02 15:29:27,291 DEBUG [RS:5;10.10.9.179:52473] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,296 INFO [RS:5;10.10.9.179:52473] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52473%2C1480721340476:(num 1480721342872)
2016-12-02 15:29:27,296 DEBUG [RS:5;10.10.9.179:52473] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,291 INFO [RS:6;10.10.9.179:52476] regionserver.Leases(147): RS:6;10.10.9.179:52476 closing leases
2016-12-02 15:29:27,296 INFO [RS:6;10.10.9.179:52476] regionserver.Leases(150): RS:6;10.10.9.179:52476 closed leases
2016-12-02 15:29:27,291 INFO [RS:1;10.10.9.179:52454] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,297 INFO [RS:1;10.10.9.179:52454] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,296 INFO [RS:5;10.10.9.179:52473] regionserver.Leases(147): RS:5;10.10.9.179:52473 closing leases
2016-12-02 15:29:27,297 INFO [RS:5;10.10.9.179:52473] regionserver.Leases(150): RS:5;10.10.9.179:52473 closed leases
2016-12-02 15:29:27,296 DEBUG [RpcServer.reader=1,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=1,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52506. Number of active connections: 7
2016-12-02 15:29:27,297 INFO [RS:5;10.10.9.179:52473] hbase.ChoreService(328): Chore service for: 10.10.9.179,52473,1480721340476 had [[ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52473,1480721340476 Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,297 INFO [RS:5;10.10.9.179:52473] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,297 INFO [RS:5;10.10.9.179:52473] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,297 INFO [RS:5;10.10.9.179:52473] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,297 INFO [RS:5;10.10.9.179:52473] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,297 INFO [RS:1;10.10.9.179:52454] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b002a
2016-12-02 15:29:27,296 INFO [RS:2;10.10.9.179:52460] hbase.ChoreService(328): Chore service for: 10.10.9.179,52460,1480721340350 had [[ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52460,1480721340350 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,296 INFO [RS:7;10.10.9.179:52479] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52479%2C1480721340539:(num 1480721342836)
2016-12-02 15:29:27,298 DEBUG [RS:7;10.10.9.179:52479] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,298 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW]]} size 0
2016-12-02 15:29:27,295 INFO [RS:8;10.10.9.179:52482] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52482%2C1480721340569:(num 1480721342835)
2016-12-02 15:29:27,298 DEBUG [RS:8;10.10.9.179:52482] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,299 INFO [RS:8;10.10.9.179:52482] regionserver.Leases(147): RS:8;10.10.9.179:52482 closing leases
2016-12-02 15:29:27,300 DEBUG [RS:1;10.10.9.179:52454] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,300 INFO [RS:1;10.10.9.179:52454] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:1;10.10.9.179:52454.replicationSource.10.10.9.179%2C52454%2C1480721340310,1 terminated
2016-12-02 15:29:27,295 INFO [RS:4;10.10.9.179:52467] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,295 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,295 INFO [RS:9;10.10.9.179:52485] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,302 INFO [RS:9;10.10.9.179:52485] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,303 INFO [RS:9;10.10.9.179:52485] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,303 INFO [RS:9;10.10.9.179:52485] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,302 INFO [RS:1;10.10.9.179:52454] ipc.RpcServer(2684): Stopping server on 52454
2016-12-02 15:29:27,300 INFO [RS:8;10.10.9.179:52482] regionserver.Leases(150): RS:8;10.10.9.179:52482 closed leases
2016-12-02 15:29:27,298 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,303 INFO [IPC Server handler 8 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|FINALIZED]]} size 0
2016-12-02 15:29:27,298 INFO [RS:7;10.10.9.179:52479] regionserver.Leases(147): RS:7;10.10.9.179:52479 closing leases
2016-12-02 15:29:27,303 INFO [RS:7;10.10.9.179:52479] regionserver.Leases(150): RS:7;10.10.9.179:52479 closed leases
2016-12-02 15:29:27,298 INFO [RS:2;10.10.9.179:52460] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,303 INFO [RS:2;10.10.9.179:52460] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,303 INFO [RS:2;10.10.9.179:52460] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,304 INFO [RS:2;10.10.9.179:52460] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,297 INFO [RS:5;10.10.9.179:52473] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,297 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,297 INFO [RS:6;10.10.9.179:52476] hbase.ChoreService(328): Chore service for: 10.10.9.179,52476,1480721340506 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52476,1480721340506 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,297 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52512. Number of active connections: 6
2016-12-02 15:29:27,304 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52514. Number of active connections: 5
2016-12-02 15:29:27,304 DEBUG [RpcServer.reader=0,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=0,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52509. Number of active connections: 4
2016-12-02 15:29:27,304 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,304 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,304 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,305 INFO [RS:5;10.10.9.179:52473] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0027
2016-12-02 15:29:27,304 INFO [RS:6;10.10.9.179:52476] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,305 INFO [RS:6;10.10.9.179:52476] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,305 INFO [RS:6;10.10.9.179:52476] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,305 INFO [RS:6;10.10.9.179:52476] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,304 INFO [RS:7;10.10.9.179:52479] hbase.ChoreService(328): Chore service for: 10.10.9.179,52479,1480721340539 had [[ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52479,1480721340539 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52479,1480721340539-MemstoreFlusherChore Period: 100 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,304 INFO [RS:2;10.10.9.179:52460] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,304 INFO [RS:8;10.10.9.179:52482] hbase.ChoreService(328): Chore service for: 10.10.9.179,52482,1480721340569 had [[ScheduledChore: Name: 10.10.9.179,52482,1480721340569-MemstoreFlusherChore Period: 100 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52482,1480721340569 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,303 INFO [RpcServer.listener,port=52454] ipc.RpcServer$Listener(927): RpcServer.listener,port=52454: stopping
2016-12-02 15:29:27,303 INFO [RS:4;10.10.9.179:52467] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0029
2016-12-02 15:29:27,303 INFO [RS:9;10.10.9.179:52485] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [RS:4;10.10.9.179:52467] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,305 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,305 INFO [RS:8;10.10.9.179:52482] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,307 INFO [RS:8;10.10.9.179:52482] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,307 INFO [RS:8;10.10.9.179:52482] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,307 INFO [RS:8;10.10.9.179:52482] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,305 DEBUG [RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1] regionserver.HBaseInterClusterReplicationEndpoint(182): Interrupted while sleeping between retries
2016-12-02 15:29:27,305 DEBUG [RS:5;10.10.9.179:52473] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,307 INFO [RS:5;10.10.9.179:52473] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:5;10.10.9.179:52473.replicationSource.10.10.9.179%2C52473%2C1480721340476,1 terminated
2016-12-02 15:29:27,307 INFO [RS:5;10.10.9.179:52473] ipc.RpcServer(2684): Stopping server on 52473
2016-12-02 15:29:27,305 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,305 INFO [RS:7;10.10.9.179:52479] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,307 INFO [RS:7;10.10.9.179:52479] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,307 INFO [RS:7;10.10.9.179:52479] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,305 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-491a9a80-1bde-48c2-bd71-6e53e8242d74:NORMAL:127.0.0.1:52440|FINALIZED]]} size 0
2016-12-02 15:29:27,305 INFO [RS:6;10.10.9.179:52476] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,307 INFO [RS:7;10.10.9.179:52479] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,307 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,308 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,308 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW]]} size 0
2016-12-02 15:29:27,307 INFO [RpcServer.listener,port=52473] ipc.RpcServer$Listener(927): RpcServer.listener,port=52473: stopping
2016-12-02 15:29:27,307 INFO [RS:8;10.10.9.179:52482] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,306 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52454,1480721340310; zookeeper connection closed.
2016-12-02 15:29:27,309 INFO [RS:1;10.10.9.179:52454] regionserver.HRegionServer(1166): RS:1;10.10.9.179:52454 exiting
2016-12-02 15:29:27,306 INFO [RS:9;10.10.9.179:52485] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b002c
2016-12-02 15:29:27,309 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@662ee1bb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@662ee1bb
2016-12-02 15:29:27,306 INFO [RS:4;10.10.9.179:52467] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:4;10.10.9.179:52467.replicationSource.10.10.9.179%2C52467%2C1480721340421,1 terminated
2016-12-02 15:29:27,306 INFO [RS:2;10.10.9.179:52460] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b0028
2016-12-02 15:29:27,306 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52454,1480721340310]
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52454-0x158c1de825b0006, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,306 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52454,1480721340310
2016-12-02 15:29:27,310 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52454,1480721340310 expired; onlineServers=9
2016-12-02 15:29:27,310 INFO [RS:4;10.10.9.179:52467] ipc.RpcServer(2684): Stopping server on 52467
2016-12-02 15:29:27,308 INFO [RS:7;10.10.9.179:52479] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,308 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7977f021-6728-4c15-9596-ae0129596140:NORMAL:127.0.0.1:52424|RBW], ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|FINALIZED]]} size 0
2016-12-02 15:29:27,311 INFO [RpcServer.listener,port=52467] ipc.RpcServer$Listener(927): RpcServer.listener,port=52467: stopping
2016-12-02 15:29:27,311 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|FINALIZED]]} size 0
2016-12-02 15:29:27,311 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,311 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,315 INFO [RS:8;10.10.9.179:52482] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b002b
2016-12-02 15:29:27,314 INFO [RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=23, memsize=234, hasBloomFilter=true, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/.tmp/7aefd3febd524edc944ec9cb4958c125
2016-12-02 15:29:27,315 DEBUG [RS:2;10.10.9.179:52460] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,315 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,315 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52473,1480721340476; zookeeper connection closed.
2016-12-02 15:29:27,315 INFO [RS:5;10.10.9.179:52473] regionserver.HRegionServer(1166): RS:5;10.10.9.179:52473 exiting
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,316 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,316 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases
2016-12-02 15:29:27,316 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases
2016-12-02 15:29:27,315 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,316 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1cda82fb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1cda82fb
2016-12-02 15:29:27,316 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52467,1480721340421; zookeeper connection closed.
2016-12-02 15:29:27,316 INFO [RS:4;10.10.9.179:52467] regionserver.HRegionServer(1166): RS:4;10.10.9.179:52467 exiting
2016-12-02 15:29:27,315 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,315 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,316 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5b537dbd] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5b537dbd
2016-12-02 15:29:27,315 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52467-0x158c1de825b0009, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,315 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,315 INFO [RS:7;10.10.9.179:52479] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b002f
2016-12-02 15:29:27,315 DEBUG [RS:9;10.10.9.179:52485] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,316 INFO [RS:9;10.10.9.179:52485] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:9;10.10.9.179:52485.replicationSource.10.10.9.179%2C52485%2C1480721340604,1 terminated
2016-12-02 15:29:27,314 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52424 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-97f29db3-9aee-4aff-8ada-3ef1e7e380c7:NORMAL:127.0.0.1:52432|FINALIZED], ReplicaUC[[DISK]DS-7e88facc-caeb-4cbd-a5f3-51ffa3e83242:NORMAL:127.0.0.1:52424|FINALIZED]]} size 0
2016-12-02 15:29:27,314 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases
2016-12-02 15:29:27,318 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases
2016-12-02 15:29:27,314 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases
2016-12-02 15:29:27,318 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,314 INFO [RS:6;10.10.9.179:52476] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b002d
2016-12-02 15:29:27,314 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52473,1480721340476
2016-12-02 15:29:27,318 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52473-0x158c1de825b000a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,318 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|FINALIZED]]} size 0
2016-12-02 15:29:27,318 DEBUG [RS:7;10.10.9.179:52479] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,318 INFO [RS:9;10.10.9.179:52485] ipc.RpcServer(2684): Stopping server on 52485
2016-12-02 15:29:27,319 DEBUG [RS:6;10.10.9.179:52476] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,316 INFO [RS:2;10.10.9.179:52460] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:2;10.10.9.179:52460.replicationSource.10.10.9.179%2C52460%2C1480721340350,1 terminated
2016-12-02 15:29:27,316 DEBUG [RS:8;10.10.9.179:52482] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,319 INFO [RS:8;10.10.9.179:52482] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:8;10.10.9.179:52482.replicationSource.10.10.9.179%2C52482%2C1480721340569,1 terminated
2016-12-02 15:29:27,316 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases
2016-12-02 15:29:27,319 INFO [RS:2;10.10.9.179:52460] ipc.RpcServer(2684): Stopping server on 52460
2016-12-02 15:29:27,319 INFO [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=6, memsize=78, hasBloomFilter=true, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/namespace/5450cdacaee02275eb0f7d3bc71c5f02/.tmp/7fc229ee078644e4b92d0ca6ffe29be8
2016-12-02 15:29:27,319 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,321 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,321 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,321 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:27,321 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:27,319 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9bc3c97a-816e-4b0c-a9da-cc3c4498c1b3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|FINALIZED]]} size 0
2016-12-02 15:29:27,319 INFO [RS:6;10.10.9.179:52476] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:6;10.10.9.179:52476.replicationSource.10.10.9.179%2C52476%2C1480721340506,1 terminated
2016-12-02 15:29:27,319 INFO [RpcServer.listener,port=52485] ipc.RpcServer$Listener(927): RpcServer.listener,port=52485: stopping
2016-12-02 15:29:27,319 INFO [RS:7;10.10.9.179:52479] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:7;10.10.9.179:52479.replicationSource.10.10.9.179%2C52479%2C1480721340539,1 terminated
2016-12-02 15:29:27,318 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52473,1480721340476]
2016-12-02 15:29:27,322 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52485-0x158c1de825b000e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:27,322 INFO [RS:6;10.10.9.179:52476] ipc.RpcServer(2684): Stopping server on 52476
2016-12-02 15:29:27,321 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:27,324 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:27,321 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:27,324 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:27,324 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,324 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,321 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:27,321 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:27,321 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,320 INFO [RpcServer.listener,port=52460] ipc.RpcServer$Listener(927): RpcServer.listener,port=52460: stopping
2016-12-02 15:29:27,321 INFO [RS:8;10.10.9.179:52482] ipc.RpcServer(2684): Stopping server on 52482
2016-12-02 15:29:27,319 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases
2016-12-02 15:29:27,325 INFO [RpcServer.listener,port=52482] ipc.RpcServer$Listener(927): RpcServer.listener,port=52482: stopping
2016-12-02 15:29:27,324 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:27,325 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:27,325 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:27,325 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:27,324 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52460-0x158c1de825b0007, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:27,324 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52460,1480721340350; zookeeper connection closed.
2016-12-02 15:29:27,325 INFO [RS:2;10.10.9.179:52460] regionserver.HRegionServer(1166): RS:2;10.10.9.179:52460 exiting
2016-12-02 15:29:27,324 INFO [RpcServer.listener,port=52476] ipc.RpcServer$Listener(927): RpcServer.listener,port=52476: stopping
2016-12-02 15:29:27,324 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=48, memsize=7.1 K, hasBloomFilter=false, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/81eecb0302bc474abb1f8b28ba92fbdb
2016-12-02 15:29:27,326 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52476,1480721340506; zookeeper connection closed.
2016-12-02 15:29:27,326 INFO [RS:6;10.10.9.179:52476] regionserver.HRegionServer(1166): RS:6;10.10.9.179:52476 exiting
2016-12-02 15:29:27,324 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52485,1480721340604; zookeeper connection closed.
2016-12-02 15:29:27,326 INFO [RS:9;10.10.9.179:52485] regionserver.HRegionServer(1166): RS:9;10.10.9.179:52485 exiting
2016-12-02 15:29:27,324 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(147): regionserver//10.10.9.179:0.leaseChecker closing leases
2016-12-02 15:29:27,326 INFO [regionserver//10.10.9.179:0.leaseChecker] regionserver.Leases(150): regionserver//10.10.9.179:0.leaseChecker closed leases
2016-12-02 15:29:27,324 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52460,1480721340350
2016-12-02 15:29:27,326 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52476-0x158c1de825b000b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52476,1480721340506
2016-12-02 15:29:27,326 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4d3c3ec1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4d3c3ec1
2016-12-02 15:29:27,324 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52473,1480721340476 expired; onlineServers=8
2016-12-02 15:29:27,327 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52467,1480721340421
2016-12-02 15:29:27,327 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52467,1480721340421]
2016-12-02 15:29:27,327 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52467,1480721340421 expired; onlineServers=7
2016-12-02 15:29:27,327 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52485,1480721340604
2016-12-02 15:29:27,327 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52485,1480721340604]
2016-12-02 15:29:27,323 INFO [RS:7;10.10.9.179:52479] ipc.RpcServer(2684): Stopping server on 52479
2016-12-02 15:29:27,327 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52485,1480721340604 expired; onlineServers=6
2016-12-02 15:29:27,327 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52482,1480721340569; zookeeper connection closed.
2016-12-02 15:29:27,327 INFO [RS:8;10.10.9.179:52482] regionserver.HRegionServer(1166): RS:8;10.10.9.179:52482 exiting 2016-12-02 15:29:27,326 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2099e2f5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2099e2f5 2016-12-02 15:29:27,326 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:27,326 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:27,328 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3ff1bd7f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3ff1bd7f 2016-12-02 15:29:27,328 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped 2016-12-02 15:29:27,328 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping 2016-12-02 15:29:27,326 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52482-0x158c1de825b000d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:27,326 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@c8dfeda] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@c8dfeda 2016-12-02 15:29:27,325 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped 2016-12-02 15:29:27,328 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping 2016-12-02 15:29:27,327 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52460,1480721340350 2016-12-02 15:29:27,328 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52460,1480721340350] 2016-12-02 15:29:27,328 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52460,1480721340350 expired; onlineServers=5 2016-12-02 15:29:27,327 INFO [RpcServer.listener,port=52479] ipc.RpcServer$Listener(927): RpcServer.listener,port=52479: stopping 2016-12-02 15:29:27,328 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52476,1480721340506 2016-12-02 15:29:27,329 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52476,1480721340506] 2016-12-02 15:29:27,328 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/.tmp/7aefd3febd524edc944ec9cb4958c125 as 
hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7aefd3febd524edc944ec9cb4958c125 2016-12-02 15:29:27,328 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52479-0x158c1de825b000c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52479,1480721340539 2016-12-02 15:29:27,328 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52479,1480721340539 2016-12-02 15:29:27,329 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52479,1480721340539; zookeeper connection closed. 2016-12-02 15:29:27,330 INFO [RS:7;10.10.9.179:52479] regionserver.HRegionServer(1166): RS:7;10.10.9.179:52479 exiting 2016-12-02 15:29:27,329 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52476,1480721340506 expired; onlineServers=4 2016-12-02 15:29:27,329 DEBUG [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/namespace/5450cdacaee02275eb0f7d3bc71c5f02/.tmp/7fc229ee078644e4b92d0ca6ffe29be8 as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/namespace/5450cdacaee02275eb0f7d3bc71c5f02/info/7fc229ee078644e4b92d0ca6ffe29be8 2016-12-02 15:29:27,330 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5967d0f8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5967d0f8 2016-12-02 15:29:27,330 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52482,1480721340569 2016-12-02 15:29:27,330 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52482,1480721340569] 2016-12-02 15:29:27,330 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52482,1480721340569 expired; onlineServers=3 2016-12-02 15:29:27,330 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52479,1480721340539 2016-12-02 15:29:27,330 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52479,1480721340539] 2016-12-02 15:29:27,330 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52479,1480721340539 expired; onlineServers=2 2016-12-02 15:29:27,331 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.StoreFileReader(481): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 81eecb0302bc474abb1f8b28ba92fbdb 2016-12-02 15:29:27,333 INFO [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/namespace/5450cdacaee02275eb0f7d3bc71c5f02/info/7fc229ee078644e4b92d0ca6ffe29be8, entries=2, sequenceid=6, filesize=4.8 K 2016-12-02 15:29:27,333 INFO 
[RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7aefd3febd524edc944ec9cb4958c125, entries=9, sequenceid=23, filesize=5.0 K 2016-12-02 15:29:27,334 INFO [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.HRegion(2644): Finished memstore flush of ~78 B/78, currentsize=0 B/0 for region hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02. in 88ms, sequenceid=6, compaction requested=false 2016-12-02 15:29:27,334 INFO [RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.HRegion(2644): Finished memstore flush of ~234 B/234, currentsize=0 B/0 for region testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66. in 108ms, sequenceid=23, compaction requested=false 2016-12-02 15:29:27,340 INFO [StoreCloserThread-hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02.-1] regionserver.HStore(874): Closed info 2016-12-02 15:29:27,340 DEBUG [StoreCloserThread-testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac' to region=77b3f337f846c19e5ea9c885289510ac hfile=7d32bddca91e4f98abab98f3a0fb587e 2016-12-02 15:29:27,340 DEBUG [StoreCloserThread-testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.-1] regionserver.StoreFileInfo(455): reference 'hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14' to region=6ef423c0591830e60ab18a766b7caf14 hfile=b9d9a5aa40c343a1b265b67fc35cef21 2016-12-02 15:29:27,340 DEBUG [StoreCloserThread-testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.-1] regionserver.HStore(2445): Moving the files [hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac-hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/77b3f337f846c19e5ea9c885289510ac/f/7d32bddca91e4f98abab98f3a0fb587e-top, hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14-hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/6ef423c0591830e60ab18a766b7caf14/f/b9d9a5aa40c343a1b265b67fc35cef21-top] to archive 2016-12-02 15:29:27,343 DEBUG [RS_CLOSE_REGION-10.10.9.179:52448-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/namespace/5450cdacaee02275eb0f7d3bc71c5f02/recovered.edits/9.seqid to file, newSeqId=9, maxSeqId=2 2016-12-02 15:29:27,343 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], 
2016-12-02 15:29:27,343 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-9ec9db34-4e19-4191-b144-62275f2077e0:NORMAL:127.0.0.1:52436|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW]]} size 0
2016-12-02 15:29:27,344 INFO [RS_CLOSE_REGION-10.10.9.179:52448-0] regionserver.HRegion(1643): Closed hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02.
2016-12-02 15:29:27,345 DEBUG [RS_CLOSE_REGION-10.10.9.179:52448-0] handler.CloseRegionHandler(122): Closed hbase:namespace,,1480721342265.5450cdacaee02275eb0f7d3bc71c5f02.
2016-12-02 15:29:27,345 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52436 is added to blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|FINALIZED]]} size 0
2016-12-02 15:29:27,347 DEBUG [StoreCloserThread-testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.-1] backup.HFileArchiver(235): Archiving compacted store files.
2016-12-02 15:29:27,347 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-cc7f8b08-497e-4564-9350-b8bad4875d61:NORMAL:127.0.0.1:52436|FINALIZED], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|FINALIZED]]} size 0
2016-12-02 15:29:27,347 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=48, memsize=632, hasBloomFilter=false, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/ae90a65604484e859ec5745faf008317
2016-12-02 15:29:27,350 DEBUG [StoreCloserThread-testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.-1] backup.HFileArchiver(427): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac, to hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/archive/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/7d32bddca91e4f98abab98f3a0fb587e.77b3f337f846c19e5ea9c885289510ac
2016-12-02 15:29:27,351 DEBUG [StoreCloserThread-testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.-1] backup.HFileArchiver(427): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14, to hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/archive/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/f/b9d9a5aa40c343a1b265b67fc35cef21.6ef423c0591830e60ab18a766b7caf14
2016-12-02 15:29:27,355 INFO [StoreCloserThread-testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.-1] regionserver.HStore(874): Closed f
2016-12-02 15:29:27,358 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-1] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/default/testRegionMerge/0115985df04bcc343330799dd037ce66/recovered.edits/26.seqid to file, newSeqId=26, maxSeqId=11
2016-12-02 15:29:27,359 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-1] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.replication.TestMasterReplication$CoprocessorCounter
2016-12-02 15:29:27,360 INFO [RS_CLOSE_REGION-10.10.9.179:52464-1] regionserver.HRegion(1643): Closed testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:27,360 DEBUG [RS_CLOSE_REGION-10.10.9.179:52464-1] handler.CloseRegionHandler(122): Closed testRegionMerge,,1480721359983.0115985df04bcc343330799dd037ce66.
2016-12-02 15:29:27,361 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741858_1034{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|FINALIZED]]} size 0
2016-12-02 15:29:27,362 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741858_1034{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|FINALIZED]]} size 0
2016-12-02 15:29:27,363 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741858_1034 size 5962
2016-12-02 15:29:27,363 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=48, memsize=1.2 K, hasBloomFilter=false, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/cd953f2d72ba43ddbb1f57129d8c924a
2016-12-02 15:29:27,371 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741859_1035{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 0
2016-12-02 15:29:27,372 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741859_1035{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-acaec845-0744-4b60-8e8f-289bfadf69f9:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-2d2bb05a-61b0-48bc-8aa0-6491a7a534e0:NORMAL:127.0.0.1:52416|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 0
2016-12-02 15:29:27,374 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741859_1035 size 5080
2016-12-02 15:29:27,374 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=48, memsize=511, hasBloomFilter=false, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/18a74e05af234a22b5296f1ae482498c
2016-12-02 15:29:27,382 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52416 is added to blk_1073741860_1036{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|FINALIZED]]} size 0
2016-12-02 15:29:27,384 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741860_1036{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|FINALIZED], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|FINALIZED]]} size 0
2016-12-02 15:29:27,385 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741860_1036{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-3fb69a78-3dd2-4315-8972-72f6ba4e1270:NORMAL:127.0.0.1:52416|FINALIZED], ReplicaUC[[DISK]DS-378ff72f-b0d6-4d05-815b-ae7795fe2171:NORMAL:127.0.0.1:52407|FINALIZED]]} size 0
2016-12-02 15:29:27,385 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.DefaultStoreFlusher(91): Flushed, sequenceid=48, memsize=272, hasBloomFilter=false, into tmp file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/010b1a5324e64a1abc0a56faa846aa2b
2016-12-02 15:29:27,388 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/81eecb0302bc474abb1f8b28ba92fbdb as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/info/81eecb0302bc474abb1f8b28ba92fbdb
2016-12-02 15:29:27,390 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.StoreFileReader(481): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 81eecb0302bc474abb1f8b28ba92fbdb
2016-12-02 15:29:27,390 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/info/81eecb0302bc474abb1f8b28ba92fbdb, entries=32, sequenceid=48, filesize=8.5 K
2016-12-02 15:29:27,390 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/ae90a65604484e859ec5745faf008317 as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/rep_barrier/ae90a65604484e859ec5745faf008317
2016-12-02 15:29:27,393 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/rep_barrier/ae90a65604484e859ec5745faf008317, entries=8, sequenceid=48, filesize=5.2 K
2016-12-02 15:29:27,393 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/cd953f2d72ba43ddbb1f57129d8c924a as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/rep_meta/cd953f2d72ba43ddbb1f57129d8c924a
2016-12-02 15:29:27,396 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/rep_meta/cd953f2d72ba43ddbb1f57129d8c924a, entries=13, sequenceid=48, filesize=5.8 K
2016-12-02 15:29:27,397 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/18a74e05af234a22b5296f1ae482498c as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/rep_position/18a74e05af234a22b5296f1ae482498c
2016-12-02 15:29:27,399 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/rep_position/18a74e05af234a22b5296f1ae482498c, entries=5, sequenceid=48, filesize=5.0 K
2016-12-02 15:29:27,400 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegionFileSystem(395): Committing store file hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/.tmp/010b1a5324e64a1abc0a56faa846aa2b as hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/table/010b1a5324e64a1abc0a56faa846aa2b
2016-12-02 15:29:27,403 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HStore(970): Added hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/table/010b1a5324e64a1abc0a56faa846aa2b, entries=6, sequenceid=48, filesize=4.8 K
2016-12-02 15:29:27,404 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegion(2644): Finished memstore flush of ~9.64 KB/9867, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 149ms, sequenceid=48, compaction requested=false
2016-12-02 15:29:27,406 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed info
2016-12-02 15:29:27,407 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_barrier
2016-12-02 15:29:27,409 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_meta
2016-12-02 15:29:27,410 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed rep_position
2016-12-02 15:29:27,412 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(874): Closed table
2016-12-02 15:29:27,416 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] wal.WALSplitter(734): Wrote region seqId=hdfs://localhost:52402/user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/data/hbase/meta/1588230740/recovered.edits/51.seqid to file, newSeqId=51, maxSeqId=3
2016-12-02 15:29:27,416 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] coprocessor.CoprocessorHost(292): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2016-12-02 15:29:27,417 INFO [RS_CLOSE_META-10.10.9.179:52448-0] regionserver.HRegion(1643): Closed hbase:meta,,1.1588230740
2016-12-02 15:29:27,417 DEBUG [RS_CLOSE_META-10.10.9.179:52448-0] handler.CloseRegionHandler(122): Closed hbase:meta,,1.1588230740
2016-12-02 15:29:27,427 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52464,1480721340388; all regions closed.
2016-12-02 15:29:27,428 DEBUG [RS:3;10.10.9.179:52464] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:27,436 INFO [IPC Server handler 9 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52440 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW]]} size 83
2016-12-02 15:29:27,438 INFO [IPC Server handler 1 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW]]} size 83
2016-12-02 15:29:27,439 INFO [IPC Server handler 0 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-566d1bd7-2ec8-4bbb-b01c-f4d4f53c0897:NORMAL:127.0.0.1:52440|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW], ReplicaUC[[DISK]DS-7ccb6605-886f-405c-ad06-219ad508d964:NORMAL:127.0.0.1:52420|RBW]]} size 83
2016-12-02 15:29:27,441 DEBUG [RS:3;10.10.9.179:52464] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52464%2C1480721340388:(num 1480721342835)
2016-12-02 15:29:27,441 DEBUG [RS:3;10.10.9.179:52464] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] regionserver.Leases(147): RS:3;10.10.9.179:52464 closing leases
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] regionserver.Leases(150): RS:3;10.10.9.179:52464 closed leases
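The RS:3 shutdown above shows the WAL lifecycle on close: the FSHLog writer is closed, the finished file is moved under oldWALs, and the WAL is reported closed. A log roll takes the same close-and-archive path and can be requested per server through the Admin API; a minimal sketch, assuming a running cluster (the server name is copied from this log purely for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask one region server to roll its WAL; the current writer is
      // closed and the finished file later moves under oldWALs.
      admin.rollWALWriter(ServerName.valueOf("10.10.9.179,52464,1480721340388"));
    }
  }
}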
2016-12-02 15:29:27,441 DEBUG [RpcServer.reader=2,bindAddress=10.10.9.179,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.reader=2,bindAddress=10.10.9.179,port=52448: disconnecting client 10.10.9.179:52508. Number of active connections: 3
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] hbase.ChoreService(328): Chore service for: 10.10.9.179,52464,1480721340388 had [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52464,1480721340388 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] regionserver.CompactSplitThread(399): Waiting for Split Thread to finish...
2016-12-02 15:29:27,441 INFO [regionserver//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] regionserver.CompactSplitThread(399): Waiting for Merge Thread to finish...
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] regionserver.CompactSplitThread(399): Waiting for Large Compaction Thread to finish...
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] regionserver.CompactSplitThread(399): Waiting for Small Compaction Thread to finish...
2016-12-02 15:29:27,441 INFO [RS:3;10.10.9.179:52464] regionserver.ReplicationSource(391): Closing source 1 because: Region server is closing
2016-12-02 15:29:27,442 DEBUG [RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1] regionserver.HBaseInterClusterReplicationEndpoint(182): Interrupted while sleeping between retries
2016-12-02 15:29:27,442 INFO [RS:3;10.10.9.179:52464] client.ConnectionImplementation(1185): Closing zookeeper sessionid=0x158c1de825b002e
2016-12-02 15:29:27,442 DEBUG [RS:3;10.10.9.179:52464] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,443 INFO [RS:3;10.10.9.179:52464] regionserver.ReplicationSource(411): ReplicationSourceWorker RS:3;10.10.9.179:52464.replicationSource.10.10.9.179%2C52464%2C1480721340388,1 terminated
2016-12-02 15:29:27,443 INFO [RS:3;10.10.9.179:52464] ipc.RpcServer(2684): Stopping server on 52464
2016-12-02 15:29:27,443 INFO [RpcServer.listener,port=52464] ipc.RpcServer$Listener(927): RpcServer.listener,port=52464: stopping
2016-12-02 15:29:27,443 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,443 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,444 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): regionserver:52464-0x158c1de825b0008, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:27,444 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52464,1480721340388
2016-12-02 15:29:27,444 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52464,1480721340388]
2016-12-02 15:29:27,444 INFO [main-EventThread] master.ServerManager(605): Cluster shutdown set; 10.10.9.179,52464,1480721340388 expired; onlineServers=1
2016-12-02 15:29:27,444 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52464,1480721340388; zookeeper connection closed.
2016-12-02 15:29:27,444 INFO [RS:3;10.10.9.179:52464] regionserver.HRegionServer(1166): RS:3;10.10.9.179:52464 exiting
2016-12-02 15:29:27,444 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5cc6a1a2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5cc6a1a2
2016-12-02 15:29:27,444 INFO [main] util.JVMClusterUtil(331): Shutdown of 1 master(s) and 10 regionserver(s) complete
2016-12-02 15:29:27,453 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(1119): stopping server 10.10.9.179,52448,1480721340079; all regions closed.
2016-12-02 15:29:27,453 DEBUG [M:0;10.10.9.179:52448] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:27,457 INFO [IPC Server handler 3 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52420 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW]]} size 83
2016-12-02 15:29:27,459 INFO [IPC Server handler 5 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52428 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW]]} size 83
2016-12-02 15:29:27,461 INFO [IPC Server handler 7 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52403 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5c11c7e7-70d3-4070-88eb-0c965fbb83c1:NORMAL:127.0.0.1:52428|RBW], ReplicaUC[[DISK]DS-6e73f07a-7e35-41e0-8f31-aa3eaf2f4083:NORMAL:127.0.0.1:52420|RBW], ReplicaUC[[DISK]DS-228e15b4-22f0-4d12-a083-5b9d180b1d06:NORMAL:127.0.0.1:52403|RBW]]} size 83
2016-12-02 15:29:27,463 DEBUG [M:0;10.10.9.179:52448] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,463 INFO [M:0;10.10.9.179:52448] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52448%2C1480721340079.meta:.meta(num 1480721341609)
2016-12-02 15:29:27,463 DEBUG [M:0;10.10.9.179:52448] wal.FSHLog(427): Closing WAL writer in /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/WALs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:27,465 INFO [IPC Server handler 6 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52412 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 83
2016-12-02 15:29:27,466 INFO [IPC Server handler 2 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52407 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 83
2016-12-02 15:29:27,467 INFO [IPC Server handler 4 on 52402] blockmanagement.BlockManager(2624): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52432 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-7b1dc621-03e2-4208-a089-45882edf6203:NORMAL:127.0.0.1:52432|RBW], ReplicaUC[[DISK]DS-370520ed-6fc4-4604-b142-a5d4284a311c:NORMAL:127.0.0.1:52407|RBW], ReplicaUC[[DISK]DS-09afa37d-7680-43c2-9a55-48fdc90bdca3:NORMAL:127.0.0.1:52412|RBW]]} size 83
2016-12-02 15:29:27,468 DEBUG [M:0;10.10.9.179:52448] wal.AbstractFSWAL(821): Moved 1 WAL file(s) to /user/tyu/test-data/fc447f92-e9d3-438b-b123-df8b994126fd/oldWALs
2016-12-02 15:29:27,469 INFO [M:0;10.10.9.179:52448] wal.AbstractFSWAL(823): Closed WAL: FSHLog 10.10.9.179%2C52448%2C1480721340079:(num 1480721342478)
2016-12-02 15:29:27,469 DEBUG [M:0;10.10.9.179:52448] ipc.AbstractRpcClient(478): Stopping rpc client
2016-12-02 15:29:27,469 INFO [M:0;10.10.9.179:52448] regionserver.Leases(147): M:0;10.10.9.179:52448 closing leases
2016-12-02 15:29:27,469 INFO [M:0;10.10.9.179:52448] regionserver.Leases(150): M:0;10.10.9.179:52448 closed leases
2016-12-02 15:29:27,469 INFO [M:0;10.10.9.179:52448] hbase.ChoreService(328): Chore service for: 10.10.9.179,52448,1480721340079 had [[ScheduledChore: Name: CatalogJanitor-10.10.9.179:52448 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52448,1480721340079-ExpiredMobFileCleanerChore Period: 86400 Unit: SECONDS], [ScheduledChore: Name: 10.10.9.179,52448,1480721340079-MobCompactionChore Period: 604800 Unit: SECONDS], [ScheduledChore: Name: ReplicationMetaCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region 10.10.9.179,52448,1480721340079 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52448,1480721340079-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52448,1480721340079-ClusterStatusChore Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: 10.10.9.179,52448,1480721340079-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,469 INFO [master//10.10.9.179:0.logRoller] regionserver.LogRoller(173): LogRoller exiting.
2016-12-02 15:29:27,469 INFO [M:0;10.10.9.179:52448] master.MasterMobCompactionThread(174): Waiting for Mob Compaction Thread to finish...
2016-12-02 15:29:27,469 INFO [M:0;10.10.9.179:52448] master.MasterMobCompactionThread(174): Waiting for Region Server Mob Compaction Thread to finish...
2016-12-02 15:29:27,469 DEBUG [M:0;10.10.9.179:52448] master.HMaster(1026): Stopping service threads
2016-12-02 15:29:27,470 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/master
2016-12-02 15:29:27,470 INFO [M:0;10.10.9.179:52448] hbase.ChoreService(328): Chore service for: 10.10.9.179,52448,1480721340079_splitLogManager_ had [[ScheduledChore: Name: SplitLogManager Timeout Monitor Period: 1000 Unit: MILLISECONDS]] on shutdown
2016-12-02 15:29:27,470 INFO [M:0;10.10.9.179:52448] flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2016-12-02 15:29:27,470 INFO [M:0;10.10.9.179:52448] ipc.RpcServer(2684): Stopping server on 52448
2016-12-02 15:29:27,470 DEBUG [main-EventThread] zookeeper.ZKUtil(365): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Set watcher on znode that does not yet exist, /1/master
2016-12-02 15:29:27,470 INFO [RpcServer.listener,port=52448] ipc.RpcServer$Listener(927): RpcServer.listener,port=52448: stopping
2016-12-02 15:29:27,470 INFO [RpcServer.responder] ipc.RpcServer$Responder(1145): RpcServer.responder: stopped
2016-12-02 15:29:27,470 INFO [RpcServer.responder] ipc.RpcServer$Responder(1048): RpcServer.responder: stopping
2016-12-02 15:29:27,470 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.listener,port=52448: disconnecting client 10.10.9.179:54256. Number of active connections: 2
2016-12-02 15:29:27,471 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.listener,port=52448: disconnecting client 10.10.9.179:54352. Number of active connections: 1
2016-12-02 15:29:27,471 DEBUG [RpcServer.listener,port=52448] ipc.RpcServer$ConnectionManager(3134): RpcServer.listener,port=52448: disconnecting client 10.10.9.179:54355. Number of active connections: 0
2016-12-02 15:29:27,472 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): master:52448-0x158c1de825b0004, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/10.10.9.179,52448,1480721340079
2016-12-02 15:29:27,472 INFO [main-EventThread] zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [10.10.9.179,52448,1480721340079]
2016-12-02 15:29:27,472 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(1163): stopping server 10.10.9.179,52448,1480721340079; zookeeper connection closed.
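The NodeDeleted events above are the mechanism by which the master notices dead region servers: each server holds an ephemeral znode under /1/rs, and when its ZooKeeper session ends the znode disappears and RegionServerTracker processes the expiration. A standalone sketch of the same watch pattern using the plain ZooKeeper client follows; the quorum address and znode path are taken from this log, while the watcher class itself is illustrative, not part of HBase:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsNodeWatcher {
  public static void main(String[] args) throws Exception {
    // Connect to the test quorum and print every event we receive.
    ZooKeeper zk = new ZooKeeper("localhost:60648", 30000, new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        System.out.println("type=" + event.getType() + " path=" + event.getPath());
      }
    });
    // Set a children watch on the region-server registry; the watch fires
    // when an ephemeral member znode is created or deleted.
    zk.getChildren("/1/rs", true);
    Thread.sleep(60000);  // keep the session alive long enough to observe events
  }
}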
2016-12-02 15:29:27,472 INFO [M:0;10.10.9.179:52448] regionserver.HRegionServer(1166): M:0;10.10.9.179:52448 exiting
2016-12-02 15:29:27,476 INFO [main] zookeeper.MiniZooKeeperCluster(319): Shutdown MiniZK cluster with all ZK servers
2016-12-02 15:29:27,476 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:27,483 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:27,575 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): ReplicationAdmin-0x158c1de825b0042, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,575 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x48767423-0x158c1de825b0021, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,575 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): cluster2-0x158c1de825b0001, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,575 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(534): cluster2-0x158c1de825b0001, quorum=localhost:60648, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x11d3cbcb-0x158c1de825b0020, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,576 DEBUG [RS:2;10.10.9.179:52460-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x11d3cbcb-0x158c1de825b0020, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x2fdb87ce-0x158c1de825b0026, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,576 DEBUG [RS:5;10.10.9.179:52473-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x2fdb87ce-0x158c1de825b0026, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x77888435-0x158c1de825b0002, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,576 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x77888435-0x158c1de825b0002, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(534): ReplicationAdmin-0x158c1de825b0042, quorum=localhost:60648, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x147377a1-0x158c1de825b001e, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,576 DEBUG [RS:0;10.10.9.179:52450-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x147377a1-0x158c1de825b001e, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): ReplicationAdmin-0x158c1de825b001c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,576 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(534): ReplicationAdmin-0x158c1de825b001c, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,576 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): ReplicationAdmin-0x158c1de825b001c, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,576 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(534): ReplicationAdmin-0x158c1de825b001c, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x65776f35-0x158c1de825b0022, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,577 DEBUG [RS:8;10.10.9.179:52482-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x65776f35-0x158c1de825b0022, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): replicationLogCleaner-0x158c1de825b0040, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,577 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(534): replicationLogCleaner-0x158c1de825b0040, quorum=localhost:60648, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x6b943405-0x158c1de825b0041, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,577 DEBUG [10.10.9.179:52887.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x6b943405-0x158c1de825b0041, quorum=localhost:60648, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:0;10.10.9.179:52893-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x484a949a-0x158c1de825b0043, quorum=localhost:60648, baseZNode=/2 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,577 DEBUG [RS:0;10.10.9.179:52893-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x484a949a-0x158c1de825b0043, quorum=localhost:60648, baseZNode=/2 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:4;10.10.9.179:52467-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x48767423-0x158c1de825b0021, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x305c2874-0x158c1de825b0024, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,577 DEBUG [RS:1;10.10.9.179:52454-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x305c2874-0x158c1de825b0024, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x57479bf4-0x158c1de825b001d, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,577 DEBUG [RS:3;10.10.9.179:52464-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x57479bf4-0x158c1de825b001d, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): cluster1-0x158c1de825b0000, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,577 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(534): cluster1-0x158c1de825b0000, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x3cd0811-0x158c1de825b0025, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,578 DEBUG [RS:6;10.10.9.179:52476-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x3cd0811-0x158c1de825b0025, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): replicationLogCleaner-0x158c1de825b001a, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,578 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(534): replicationLogCleaner-0x158c1de825b001a, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x625d36f3-0x158c1de825b001b, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,578 DEBUG [10.10.9.179:52448.activeMasterManager-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x625d36f3-0x158c1de825b001b, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x552134b6-0x158c1de825b0023, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,578 DEBUG [RS:7;10.10.9.179:52479-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x552134b6-0x158c1de825b0023, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(466): hconnection-0x7e936139-0x158c1de825b001f, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,578 DEBUG [RS:9;10.10.9.179:52485-EventThread] zookeeper.ZooKeeperWatcher(534): hconnection-0x7e936139-0x158c1de825b001f, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,575 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(466): ReplicationAdmin-0x158c1de825b0003, quorum=localhost:60648, baseZNode=/1 Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-12-02 15:29:27,578 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher(534): ReplicationAdmin-0x158c1de825b0003, quorum=localhost:60648, baseZNode=/1 Received Disconnected from ZooKeeper, ignoring
2016-12-02 15:29:27,591 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data19/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data20/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid b8404b60-9359-40dc-a42c-a42bdfdf63da) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:27,591 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data19/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data20/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid b8404b60-9359-40dc-a42c-a42bdfdf63da) service to localhost/127.0.0.1:52402
2016-12-02 15:29:27,596 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:27,603 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:27,710 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data17/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data18/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 9caba0a3-d6ac-4110-b857-c6cd9fe9c5a0) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:27,710 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data17/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data18/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 9caba0a3-d6ac-4110-b857-c6cd9fe9c5a0) service to localhost/127.0.0.1:52402
2016-12-02 15:29:27,715 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:27,722 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:27,828 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data15/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data16/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 4f1b192e-7268-4d48-b256-8c2fb21cfe81) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:27,828 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data15/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data16/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 4f1b192e-7268-4d48-b256-8c2fb21cfe81) service to localhost/127.0.0.1:52402
2016-12-02 15:29:27,835 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:27,842 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:27,949 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data13/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data14/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid c2a456be-b211-4171-a22f-943beeeac333) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:27,949 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data13/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data14/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid c2a456be-b211-4171-a22f-943beeeac333) service to localhost/127.0.0.1:52402
2016-12-02 15:29:27,955 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:27,962 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:28,010 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data11/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data12/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 8ec5da31-8fea-4bc3-9095-c0f196238942) service to localhost/127.0.0.1:52402
2016-12-02 15:29:28,072 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:28,078 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:28,185 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data9/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data10/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid a0f52948-2a30-4098-ae5a-f778692c8af0) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:28,185 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data9/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data10/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid a0f52948-2a30-4098-ae5a-f778692c8af0) service to localhost/127.0.0.1:52402
2016-12-02 15:29:28,191 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:28,197 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:28,306 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data7/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data8/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid bd32d46d-6962-42fa-9a28-5f84d4a16693) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:28,308 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data7/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data8/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid bd32d46d-6962-42fa-9a28-5f84d4a16693) service to localhost/127.0.0.1:52402
2016-12-02 15:29:28,314 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:28,320 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:28,426 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data5/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 6e0c0a53-46e3-4f17-80e7-c5fa4983be51) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:28,426 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data5/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data6/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 6e0c0a53-46e3-4f17-80e7-c5fa4983be51) service to localhost/127.0.0.1:52402
2016-12-02 15:29:28,431 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:28,436 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:28,542 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data3/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 32930b8d-27fc-4d06-8858-24b922636671) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:28,542 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data3/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data4/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 32930b8d-27fc-4d06-8858-24b922636671) service to localhost/127.0.0.1:52402
2016-12-02 15:29:28,545 WARN [main] datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-12-02 15:29:28,550 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:28,551 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data1/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(704): BPOfferService for Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 605bc8ad-63d6-400c-9e74-24ec0df4749d) service to localhost/127.0.0.1:52402 interrupted
2016-12-02 15:29:28,551 WARN [DataNode: [[[DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data1/, [DISK]file:/Users/tyu/trunk/hbase-server/target/test-data/f4a27eeb-b2d1-4027-ae33-2b7a80684528/dfscluster_b71a223f-9583-415d-95bc-a6fdc3e16a05/dfs/data/data2/]] heartbeating to localhost/127.0.0.1:52402] datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1145896233-10.10.9.179-1480721336003 (Datanode Uuid 605bc8ad-63d6-400c-9e74-24ec0df4749d) service to localhost/127.0.0.1:52402
2016-12-02 15:29:28,562 INFO [main] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2016-12-02 15:29:28,616 INFO [main] hbase.HBaseTestingUtility(1175): Minicluster is down
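"Minicluster is down" closes the teardown: HBase stops first, then the MiniZK ensemble and each MiniDFS datanode. The lifecycle that produces this whole log can be reduced to a few calls on HBaseTestingUtility; the following is a minimal sketch of that shape only (the actual test, TestMasterReplication, configures and starts two clusters, which is omitted here):

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class MiniClusterLifecycle {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Starts MiniZK, MiniDFS and a mini HBase cluster.
    util.startMiniCluster();
    try {
      // ... exercise the cluster through util.getConnection() ...
    } finally {
      // Stops HBase, then ZK and DFS, ending with "Minicluster is down".
      util.shutdownMiniCluster();
    }
  }
}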
2016-12-02 15:29:28,620 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,620 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,620 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,620 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,620 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,620 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,621 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,622 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,622 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,622 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.
2016-12-02 15:29:28,622 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@41e1e210
2016-12-02 15:29:28,622 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(120): Starting fs shutdown hook thread.
2016-12-02 15:29:28,622 INFO [Thread-2] regionserver.ShutdownHook$ShutdownHookThread(134): Shutdown hook finished.