Jenkins: HBase-0.95-Hadoop-2 #635 (HBase - Server Test Results)

Regression

org.apache.hadoop.hbase.replication.TestReplicationQueueFailoverCompressed.queueFailover

Failing for the past 1 build (Since #635)
Took 0.25 sec.

Error Message

Failed after attempts=6, exceptions:
Tue Jul 16 17:14:44 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, java.net.ConnectException: Connection refused
Tue Jul 16 17:14:44 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939
Tue Jul 16 17:14:44 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939
Tue Jul 16 17:14:45 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939
Tue Jul 16 17:14:46 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, java.net.ConnectException: Connection refused
Tue Jul 16 17:14:56 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, java.net.ConnectException: Connection refused

Stacktrace

org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=6, exceptions:
Tue Jul 16 17:14:44 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, java.net.ConnectException: Connection refused
Tue Jul 16 17:14:44 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939
Tue Jul 16 17:14:44 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939
Tue Jul 16 17:14:45 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939
Tue Jul 16 17:14:46 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, java.net.ConnectException: Connection refused
Tue Jul 16 17:14:56 UTC 2013, org.apache.hadoop.hbase.client.ScannerCallable@2dbb97d4, java.net.ConnectException: Connection refused
	at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611)
	at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843)
	at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473)
	at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365)
	at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591)
	at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370)
	at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55)
	at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175)
	... 13 more

Standard Output

Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID

Standard Error

2013-07-16 17:14:01,953 WARN [pool-1-thread-1] conf.Configuration(817): hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2013-07-16 17:14:02,152 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(358): Created new mini-cluster data directory: /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb
2013-07-16 17:14:02,369 INFO [pool-1-thread-1] zookeeper.MiniZooKeeperCluster(197): Started MiniZK Cluster and connect 1 ZK server on client port: 62127
2013-07-16 17:14:02,421 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=cluster1 connecting to ZooKeeper ensemble=localhost:62127
2013-07-16 17:14:02,453 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): cluster1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2013-07-16 17:14:02,457 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): cluster1-0x13fe879789b0000 connected
2013-07-16 17:14:02,720 WARN [pool-1-thread-1] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable 2013-07-16 17:14:02,912 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3a633d51 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:02,914 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3a633d51 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:02,916 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3a633d51-0x13fe879789b0001 connected 2013-07-16 17:14:02,916 INFO [pool-1-thread-1] client.ZooKeeperRegistry(85): ClusterId read in ZooKeeper is null 2013-07-16 17:14:02,916 DEBUG [pool-1-thread-1] client.HConnectionManager$HConnectionImplementation(591): clusterid came back null, using default default-cluster 2013-07-16 17:14:02,946 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=Replication Admin connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:02,950 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): Replication Admin Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:02,951 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): Replication Admin-0x13fe879789b0002 connected 2013-07-16 17:14:03,071 INFO [pool-1-thread-1] replication.TestReplicationBase(100): Setup first Zk 2013-07-16 17:14:03,138 WARN [pool-1-thread-1] conf.Configuration(817): mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir 2013-07-16 17:14:03,144 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=cluster2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:03,148 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): cluster2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:03,149 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): cluster2-0x13fe879789b0003 connected 2013-07-16 17:14:03,185 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(462): Node /1/replication/peers already exists and this is not a retry 2013-07-16 17:14:03,199 INFO [pool-1-thread-1] replication.TestReplicationBase(115): Setup second Zk 2013-07-16 17:14:03,265 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(801): Starting up minicluster with 1 master(s) and 2 regionserver(s) and 2 datanode(s) 2013-07-16 17:14:03,266 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting test.cache.data to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/cache_data in system properties and HBase conf 2013-07-16 17:14:03,267 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting hadoop.tmp.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop_tmp in system properties and HBase conf 2013-07-16 17:14:03,267 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting hadoop.log.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop_logs in system properties and HBase conf 2013-07-16 17:14:03,268 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting mapred.local.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/mapred_local in system properties and HBase conf 2013-07-16 17:14:03,269 WARN 
[pool-1-thread-1] conf.Configuration(817): mapred.temp.dir is deprecated. Instead, use mapreduce.cluster.temp.dir 2013-07-16 17:14:03,269 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting mapred.temp.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/mapred_temp in system properties and HBase conf 2013-07-16 17:14:03,270 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(571): read short circuit is ON for user ec2-user 2013-07-16 17:14:03,380 DEBUG [pool-1-thread-1] fs.HFileSystem(213): The file system is not a DistributedFileSystem. Skipping on block location reordering 2013-07-16 17:14:03,386 WARN [pool-1-thread-1] conf.Configuration(817): mapred.system.dir is deprecated. Instead, use mapreduce.jobtracker.system.dir 2013-07-16 17:14:03,387 WARN [pool-1-thread-1] conf.Configuration(817): mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir 2013-07-16 17:14:03,550 WARN [pool-1-thread-1] conf.Configuration(817): hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping 2013-07-16 17:14:04,045 WARN [pool-1-thread-1] impl.MetricsConfig(124): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2013-07-16 17:14:04,541 INFO [pool-1-thread-1] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2013-07-16 17:14:04,630 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26 2013-07-16 17:14:04,666 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/home/ec2-user/jenkins/maven-repositories/0/org/apache/hadoop/hadoop-hdfs/2.0.5-alpha/hadoop-hdfs-2.0.5-alpha-tests.jar!/webapps/hdfs to /tmp/Jetty_localhost_54070_hdfs____phosl0/webapp 2013-07-16 17:14:04,878 INFO [pool-1-thread-1] log.Slf4jLog(67): Started SelectChannelConnector@localhost:54070 2013-07-16 17:14:04,952 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26 2013-07-16 17:14:04,956 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/home/ec2-user/jenkins/maven-repositories/0/org/apache/hadoop/hadoop-hdfs/2.0.5-alpha/hadoop-hdfs-2.0.5-alpha-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_33846_datanode____vaodvl/webapp 2013-07-16 17:14:05,061 INFO [pool-1-thread-1] log.Slf4jLog(67): Started SelectChannelConnector@localhost:33846 2013-07-16 17:14:05,212 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26 2013-07-16 17:14:05,220 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/home/ec2-user/jenkins/maven-repositories/0/org/apache/hadoop/hadoop-hdfs/2.0.5-alpha/hadoop-hdfs-2.0.5-alpha-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_33646_datanode____.ikfqyp/webapp 2013-07-16 17:14:05,328 INFO [pool-1-thread-1] log.Slf4jLog(67): Started SelectChannelConnector@localhost:33646 2013-07-16 17:14:05,945 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(1584): BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-1190717763-10.197.55.49-39475-1373994845766, infoPort=33646, ipcPort=60688, storageInfo=lv=-40;cid=testClusterID;nsid=2064414120;c=0), blocks: 0, processing time: 1 msecs 2013-07-16 17:14:05,947 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(1584): BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-858037074-10.197.55.49-39876-1373994845766, infoPort=33846, ipcPort=54155, storageInfo=lv=-40;cid=testClusterID;nsid=2064414120;c=0), blocks: 0, processing time: 0 msecs 2013-07-16 17:14:06,041 WARN 
[pool-1-thread-1] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:06,184 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-3135317556251063294_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:06,189 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-3135317556251063294_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:06,194 DEBUG [pool-1-thread-1] util.FSUtils(629): Created version file at hdfs://localhost:43175/user/ec2-user/hbase with version=7 2013-07-16 17:14:06,233 DEBUG [pool-1-thread-1] client.HConnectionManager(2466): master/ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:0 HConnection server-to-server retries=350 2013-07-16 17:14:06,500 INFO [pool-1-thread-1] master.HMaster(421): hbase.rootdir=hdfs://localhost:43175/user/ec2-user/hbase, hbase.cluster.distributed=false 2013-07-16 17:14:06,507 WARN [pool-1-thread-1] conf.Configuration(817): mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id 2013-07-16 17:14:06,510 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=master:50904 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:06,514 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:06,515 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(462): Node /1 already exists and this is not a retry 2013-07-16 17:14:06,515 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): master:50904-0x13fe879789b0004 connected 2013-07-16 17:14:06,590 WARN [pool-1-thread-1] conf.Configuration(817): dfs.data.dir is deprecated. Instead, use dfs.datanode.data.dir 2013-07-16 17:14:06,590 WARN [pool-1-thread-1] conf.Configuration(817): dfs.name.dir is deprecated. Instead, use dfs.namenode.name.dir 2013-07-16 17:14:06,590 WARN [pool-1-thread-1] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:06,591 WARN [pool-1-thread-1] conf.Configuration(817): fs.checkpoint.dir is deprecated. Instead, use dfs.namenode.checkpoint.dir 2013-07-16 17:14:06,592 WARN [pool-1-thread-1] conf.Configuration(817): dfs.http.address is deprecated. Instead, use dfs.namenode.http-address 2013-07-16 17:14:06,593 WARN [pool-1-thread-1] conf.Configuration(817): dfs.safemode.extension is deprecated. Instead, use dfs.namenode.safemode.extension 2013-07-16 17:14:06,593 WARN [pool-1-thread-1] conf.Configuration(817): mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir 2013-07-16 17:14:06,594 WARN [pool-1-thread-1] conf.Configuration(817): topology.node.switch.mapping.impl is deprecated. 
Instead, use net.topology.node.switch.mapping.impl 2013-07-16 17:14:06,637 DEBUG [pool-1-thread-1] client.HConnectionManager(2466): regionserver/ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:0 HConnection server-to-server retries=350 2013-07-16 17:14:06,744 INFO [pool-1-thread-1] hfile.CacheConfig(407): Allocating LruBlockCache with maximum size 675.6 M 2013-07-16 17:14:06,778 WARN [pool-1-thread-1] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:06,782 DEBUG [pool-1-thread-1] client.HConnectionManager(2466): regionserver/ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:0 HConnection server-to-server retries=350 2013-07-16 17:14:06,799 DEBUG [M:0;ip-10-197-55-49:50904] zookeeper.ZKUtil(433): master:50904-0x13fe879789b0004 Set watcher on znode that does not yet exist, /1/master 2013-07-16 17:14:06,802 DEBUG [M:0;ip-10-197-55-49:50904] zookeeper.ZKUtil(433): master:50904-0x13fe879789b0004 Set watcher on znode that does not yet exist, /1/running 2013-07-16 17:14:06,811 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/master 2013-07-16 17:14:06,814 WARN [M:0;ip-10-197-55-49:50904] hbase.ZNodeClearer(57): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2013-07-16 17:14:06,814 INFO [M:0;ip-10-197-55-49:50904] master.ActiveMasterManager(170): Registered Active Master=ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499 2013-07-16 17:14:06,816 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50904-0x13fe879789b0004 Set watcher on existing znode=/1/master 2013-07-16 17:14:06,817 DEBUG [pool-1-thread-1-EventThread] master.ActiveMasterManager(119): A master is now available 2013-07-16 17:14:06,826 INFO [M:0;ip-10-197-55-49:50904] master.SplitLogManager(201): timeout=120000, unassigned timeout=180000 2013-07-16 17:14:06,827 INFO [M:0;ip-10-197-55-49:50904] master.SplitLogManager(210): distributedLogReplay = false 2013-07-16 17:14:06,829 INFO [M:0;ip-10-197-55-49:50904] master.SplitLogManager(1082): Found 0 orphan tasks and 0 rescan nodes 2013-07-16 17:14:06,898 INFO [pool-1-thread-1] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;ip-10-197-55-49:49041 2013-07-16 17:14:06,899 INFO [pool-1-thread-1] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:1;ip-10-197-55-49:49955 2013-07-16 17:14:06,909 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:49041 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:06,913 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:49955 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:06,922 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49955 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:06,926 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(384): regionserver:49955-0x13fe879789b0005 connected 2013-07-16 17:14:06,926 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:06,927 DEBUG [RS:1;ip-10-197-55-49:49955] zookeeper.ZKUtil(431): 
regionserver:49955-0x13fe879789b0005 Set watcher on existing znode=/1/master 2013-07-16 17:14:06,928 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(384): regionserver:49041-0x13fe879789b0006 connected 2013-07-16 17:14:06,930 DEBUG [RS:0;ip-10-197-55-49:49041] zookeeper.ZKUtil(431): regionserver:49041-0x13fe879789b0006 Set watcher on existing znode=/1/master 2013-07-16 17:14:06,936 DEBUG [RS:1;ip-10-197-55-49:49955] zookeeper.ZKUtil(433): regionserver:49955-0x13fe879789b0005 Set watcher on znode that does not yet exist, /1/running 2013-07-16 17:14:06,937 DEBUG [RS:0;ip-10-197-55-49:49041] zookeeper.ZKUtil(433): regionserver:49041-0x13fe879789b0006 Set watcher on znode that does not yet exist, /1/running 2013-07-16 17:14:06,954 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_172227311159727447_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:06,956 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_172227311159727447_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:06,988 DEBUG [M:0;ip-10-197-55-49:50904] util.FSUtils(758): Created cluster ID file at hdfs://localhost:43175/user/ec2-user/hbase/hbase.id with ID: 9bb659c2-f860-4340-b5f5-0571795e3364 2013-07-16 17:14:07,031 INFO [M:0;ip-10-197-55-49:50904] master.MasterFileSystem(556): BOOTSTRAP: creating META region 2013-07-16 17:14:07,032 INFO [M:0;ip-10-197-55-49:50904] regionserver.HRegion(4031): creating HRegion .META. HTD == '.META.', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '8192', ENCODE_ON_DISK => 'true', IN_MEMORY => 'false', BLOCKCACHE => 'false'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase Table name == .META. 
2013-07-16 17:14:07,059 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3720497931401349309_1006{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:07,060 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3720497931401349309_1006{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:07,072 INFO [M:0;ip-10-197-55-49:50904] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:07,108 INFO [M:0;ip-10-197-55-49:50904] wal.FSHLog(522): New WAL /user/ec2-user/hbase/.META./1028785192/.logs/hlog.1373994847080 2013-07-16 17:14:07,124 DEBUG [M:0;ip-10-197-55-49:50904] regionserver.HRegion(534): Instantiated .META.,,1.1028785192 2013-07-16 17:14:07,161 INFO [StoreOpener-1028785192/.META.-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:07,174 INFO [StoreOpener-1028785192/.META.-1] util.ChecksumType$2(68): Checksum using org.apache.hadoop.util.PureJavaCrc32 2013-07-16 17:14:07,174 INFO [StoreOpener-1028785192/.META.-1] util.ChecksumType$3(111): Checksum can use org.apache.hadoop.util.PureJavaCrc32C 2013-07-16 17:14:07,186 INFO [M:0;ip-10-197-55-49:50904] regionserver.HRegion(629): Onlined 1028785192/.META.; next sequenceid=1 2013-07-16 17:14:07,186 DEBUG [M:0;ip-10-197-55-49:50904] regionserver.HRegion(965): Closing .META.,,1.1028785192: disabling compactions & flushes 2013-07-16 17:14:07,187 DEBUG [M:0;ip-10-197-55-49:50904] regionserver.HRegion(987): Updates disabled for region .META.,,1.1028785192 2013-07-16 17:14:07,189 INFO [StoreCloserThread-.META.,,1.1028785192-1] regionserver.HStore(661): Closed info 2013-07-16 17:14:07,189 INFO [M:0;ip-10-197-55-49:50904] regionserver.HRegion(1045): Closed .META.,,1.1028785192 2013-07-16 17:14:07,190 INFO [M:0;ip-10-197-55-49:50904.logSyncer] wal.FSHLog$LogSyncer(966): M:0;ip-10-197-55-49:50904.logSyncer exiting 2013-07-16 17:14:07,190 DEBUG [M:0;ip-10-197-55-49:50904] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:43175/user/ec2-user/hbase/.META./1028785192/.logs 2013-07-16 17:14:07,201 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_4465301343630841354_1008{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:07,203 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_4465301343630841354_1008{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:07,215 DEBUG [M:0;ip-10-197-55-49:50904] wal.FSHLog(768): Moved 1 WAL file(s) to /user/ec2-user/hbase/.META./1028785192/.oldlogs 2013-07-16 
17:14:07,250 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-2304519134894331010_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:07,252 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-2304519134894331010_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:07,280 INFO [M:0;ip-10-197-55-49:50904] fs.HFileSystem(244): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2013-07-16 17:14:07,293 INFO [M:0;ip-10-197-55-49:50904] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x105b3e5d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:07,298 DEBUG [M:0;ip-10-197-55-49:50904-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x105b3e5d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:07,300 DEBUG [M:0;ip-10-197-55-49:50904-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x105b3e5d-0x13fe879789b0007 connected 2013-07-16 17:14:07,307 DEBUG [M:0;ip-10-197-55-49:50904] catalog.CatalogTracker(192): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@767946a2 2013-07-16 17:14:07,308 DEBUG [M:0;ip-10-197-55-49:50904] zookeeper.ZKUtil(433): master:50904-0x13fe879789b0004 Set watcher on znode that does not yet exist, /1/meta-region-server 2013-07-16 17:14:07,355 DEBUG [M:0;ip-10-197-55-49:50904] zookeeper.ZKUtil(433): master:50904-0x13fe879789b0004 Set watcher on znode that does not yet exist, /1/balancer 2013-07-16 17:14:07,389 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2013-07-16 17:14:07,389 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49955-0x13fe879789b0005 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2013-07-16 17:14:07,390 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/running 2013-07-16 17:14:07,395 INFO [M:0;ip-10-197-55-49:50904] master.HMaster(654): Server active/primary master=ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499, sessionid=0x13fe879789b0004, setting cluster-up flag (Was=false) 2013-07-16 17:14:07,396 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x61136da6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:07,400 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x61136da6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:07,401 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x61136da6-0x13fe879789b0008 connected 2013-07-16 17:14:07,402 DEBUG [RS:0;ip-10-197-55-49:49041] catalog.CatalogTracker(192): Starting catalog tracker 
org.apache.hadoop.hbase.catalog.CatalogTracker@1c8697ce 2013-07-16 17:14:07,406 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x87dedad connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:07,407 DEBUG [RS:0;ip-10-197-55-49:49041] zookeeper.ZKUtil(433): regionserver:49041-0x13fe879789b0006 Set watcher on znode that does not yet exist, /1/meta-region-server 2013-07-16 17:14:07,410 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x87dedad Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:07,411 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x87dedad-0x13fe879789b0009 connected 2013-07-16 17:14:07,413 DEBUG [RS:1;ip-10-197-55-49:49955] catalog.CatalogTracker(192): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@208c5a4f 2013-07-16 17:14:07,413 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(698): ClusterId : 9bb659c2-f860-4340-b5f5-0571795e3364 2013-07-16 17:14:07,415 DEBUG [RS:1;ip-10-197-55-49:49955] zookeeper.ZKUtil(433): regionserver:49955-0x13fe879789b0005 Set watcher on znode that does not yet exist, /1/meta-region-server 2013-07-16 17:14:07,418 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(698): ClusterId : 9bb659c2-f860-4340-b5f5-0571795e3364 2013-07-16 17:14:07,425 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot already exists and this is not a retry 2013-07-16 17:14:07,426 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot already exists and this is not a retry 2013-07-16 17:14:07,428 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot/acquired already exists and this is not a retry 2013-07-16 17:14:07,429 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot/acquired already exists and this is not a retry 2013-07-16 17:14:07,436 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot/reached already exists and this is not a retry 2013-07-16 17:14:07,437 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot/reached already exists and this is not a retry 2013-07-16 17:14:07,441 INFO [M:0;ip-10-197-55-49:50904] procedure.ZKProcedureUtil(258): Clearing all procedure znodes: /1/online-snapshot/acquired /1/online-snapshot/reached /1/online-snapshot/abort 2013-07-16 17:14:07,442 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot/abort already exists and this is not a retry 2013-07-16 17:14:07,442 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(462): Node /1/online-snapshot/abort already exists and this is not a retry 2013-07-16 17:14:07,447 INFO [RS:1;ip-10-197-55-49:49955] regionserver.MemStoreFlusher(117): globalMemStoreLimit=675.6 M, globalMemStoreLimitLowMark=641.8 M, maxHeap=1.6 G 2013-07-16 17:14:07,447 INFO [RS:0;ip-10-197-55-49:49041] regionserver.MemStoreFlusher(117): globalMemStoreLimit=675.6 M, globalMemStoreLimitLowMark=641.8 M, maxHeap=1.6 G 2013-07-16 17:14:07,450 DEBUG [M:0;ip-10-197-55-49:50904] procedure.ZKProcedureCoordinatorRpcs(194): Starting the controller for procedure member:ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499 2013-07-16 17:14:07,453 WARN [M:0;ip-10-197-55-49:50904] 
snapshot.SnapshotManager(269): Couldn't delete working snapshot directory: hdfs://localhost:43175/user/ec2-user/hbase/.hbase-snapshot/.tmp 2013-07-16 17:14:07,453 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer$CompactionChecker(1323): CompactionChecker runs every 0sec 2013-07-16 17:14:07,453 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer$CompactionChecker(1323): CompactionChecker runs every 0sec 2013-07-16 17:14:07,460 DEBUG [M:0;ip-10-197-55-49:50904] executor.ExecutorService(99): Starting executor service name=MASTER_OPEN_REGION-ip-10-197-55-49:50904, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:07,460 DEBUG [M:0;ip-10-197-55-49:50904] executor.ExecutorService(99): Starting executor service name=MASTER_CLOSE_REGION-ip-10-197-55-49:50904, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:07,461 DEBUG [M:0;ip-10-197-55-49:50904] executor.ExecutorService(99): Starting executor service name=MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:07,461 DEBUG [M:0;ip-10-197-55-49:50904] executor.ExecutorService(99): Starting executor service name=MASTER_META_SERVER_OPERATIONS-ip-10-197-55-49:50904, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:07,461 DEBUG [M:0;ip-10-197-55-49:50904] executor.ExecutorService(99): Starting executor service name=M_LOG_REPLAY_OPS-ip-10-197-55-49:50904, corePoolSize=10, maxPoolSize=10 2013-07-16 17:14:07,462 DEBUG [M:0;ip-10-197-55-49:50904] executor.ExecutorService(99): Starting executor service name=MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:07,462 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(1935): reportForDuty to master=ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499 with port=49041, startcode=1373994846736 2013-07-16 17:14:07,463 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1935): reportForDuty to master=ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499 with port=49955, startcode=1373994846790 2013-07-16 17:14:07,465 DEBUG [M:0;ip-10-197-55-49:50904] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2013-07-16 17:14:07,468 INFO [M:0;ip-10-197-55-49:50904] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:07,481 DEBUG [M:0;ip-10-197-55-49:50904-EventThread] zookeeper.ZooKeeperWatcher(307): replicationLogCleaner Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:07,483 DEBUG [M:0;ip-10-197-55-49:50904-EventThread] zookeeper.ZooKeeperWatcher(384): replicationLogCleaner-0x13fe879789b000a connected 2013-07-16 17:14:07,485 DEBUG [M:0;ip-10-197-55-49:50904] master.ReplicationLogCleaner(109): Didn't find this log in ZK, deleting: null 2013-07-16 17:14:07,485 DEBUG [M:0;ip-10-197-55-49:50904] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2013-07-16 17:14:07,489 DEBUG [M:0;ip-10-197-55-49:50904] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotLogCleaner 2013-07-16 17:14:07,491 DEBUG [M:0;ip-10-197-55-49:50904] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2013-07-16 17:14:07,493 DEBUG [M:0;ip-10-197-55-49:50904] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2013-07-16 17:14:07,494 DEBUG 
[M:0;ip-10-197-55-49:50904] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2013-07-16 17:14:07,495 INFO [M:0;ip-10-197-55-49:50904] master.ServerManager(800): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 2, maximum of 2, timeout of 4500 ms, interval of 1500 ms. 2013-07-16 17:14:08,066 INFO [RpcServer.handler=1,port=50904] master.ServerManager(367): Registering server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:08,066 INFO [RpcServer.handler=0,port=50904] master.ServerManager(367): Registering server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:08,080 DEBUG [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1168): Config from master: hbase.rootdir=hdfs://localhost:43175/user/ec2-user/hbase 2013-07-16 17:14:08,080 DEBUG [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(1168): Config from master: hbase.rootdir=hdfs://localhost:43175/user/ec2-user/hbase 2013-07-16 17:14:08,080 DEBUG [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1168): Config from master: fs.default.name=hdfs://localhost:43175 2013-07-16 17:14:08,081 DEBUG [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(1168): Config from master: fs.default.name=hdfs://localhost:43175 2013-07-16 17:14:08,084 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2013-07-16 17:14:08,084 WARN [RS:1;ip-10-197-55-49:49955] hbase.ZNodeClearer(57): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2013-07-16 17:14:08,085 WARN [RS:0;ip-10-197-55-49:49041] hbase.ZNodeClearer(57): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2013-07-16 17:14:08,088 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50904-0x13fe879789b0004 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:08,089 INFO [RS:1;ip-10-197-55-49:49955] fs.HFileSystem(244): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2013-07-16 17:14:08,090 DEBUG [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1420): logdir=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:08,090 INFO [RS:0;ip-10-197-55-49:49041] fs.HFileSystem(244): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2013-07-16 17:14:08,090 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50904-0x13fe879789b0004 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:08,093 DEBUG [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(1420): logdir=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:08,097 INFO [M:0;ip-10-197-55-49:50904] master.ServerManager(817): Finished waiting for region servers count to settle; checked in 2, slept for 602 ms, expecting minimum of 2, maximum of 2, master is running. 
2013-07-16 17:14:08,109 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(462): Node /1/replication/peers already exists and this is not a retry 2013-07-16 17:14:08,110 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(462): Node /1/replication/peers already exists and this is not a retry 2013-07-16 17:14:08,121 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: localhost:62127:/2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:08,123 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(120): Process identifier=connection to cluster: localhost:62127:/2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:08,127 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): connection to cluster: localhost:62127:/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:08,128 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): connection to cluster: localhost:62127:/2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:08,128 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(384): connection to cluster: localhost:62127:/2-0x13fe879789b000b connected 2013-07-16 17:14:08,129 DEBUG [RS:0;ip-10-197-55-49:49041] zookeeper.ZKUtil(431): regionserver:49041-0x13fe879789b0006 Set watcher on existing znode=/1/replication/peers/2/peer-state 2013-07-16 17:14:08,130 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(384): connection to cluster: localhost:62127:/2-0x13fe879789b000c connected 2013-07-16 17:14:08,130 DEBUG [RS:1;ip-10-197-55-49:49955] zookeeper.ZKUtil(431): regionserver:49955-0x13fe879789b0005 Set watcher on existing znode=/1/replication/peers/2/peer-state 2013-07-16 17:14:08,131 INFO [RS:0;ip-10-197-55-49:49041] replication.ReplicationPeersZKImpl(152): Added new peer cluster 2 2013-07-16 17:14:08,132 INFO [RS:1;ip-10-197-55-49:49955] replication.ReplicationPeersZKImpl(152): Added new peer cluster 2 2013-07-16 17:14:08,134 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(462): Node /1/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 already exists and this is not a retry 2013-07-16 17:14:08,134 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(462): Node /1/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 already exists and this is not a retry 2013-07-16 17:14:08,141 DEBUG [RS:0;ip-10-197-55-49:49041] zookeeper.ZKUtil(431): regionserver:49041-0x13fe879789b0006 Set watcher on existing znode=/1/replication/peers/2 2013-07-16 17:14:08,141 DEBUG [RS:1;ip-10-197-55-49:49955] zookeeper.ZKUtil(431): regionserver:49955-0x13fe879789b0005 Set watcher on existing znode=/1/replication/peers/2 2013-07-16 17:14:08,141 DEBUG [RS:0;ip-10-197-55-49:49041] regionserver.Replication(122): ReplicationStatisticsThread 5 2013-07-16 17:14:08,142 DEBUG [RS:1;ip-10-197-55-49:49955] regionserver.Replication(122): ReplicationStatisticsThread 5 2013-07-16 17:14:08,143 INFO [RS:0;ip-10-197-55-49:49041] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:08,143 INFO [RS:1;ip-10-197-55-49:49955] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:08,166 INFO 
[RS:0;ip-10-197-55-49:49041] wal.FSHLog(522): New WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 2013-07-16 17:14:08,168 INFO [RS:1;ip-10-197-55-49:49955] wal.FSHLog(522): New WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 2013-07-16 17:14:08,179 DEBUG [RS:0;ip-10-197-55-49:49041] executor.ExecutorService(99): Starting executor service name=RS_OPEN_REGION-ip-10-197-55-49:49041, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:08,180 DEBUG [RS:0;ip-10-197-55-49:49041] executor.ExecutorService(99): Starting executor service name=RS_OPEN_META-ip-10-197-55-49:49041, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:08,180 DEBUG [RS:1;ip-10-197-55-49:49955] executor.ExecutorService(99): Starting executor service name=RS_OPEN_REGION-ip-10-197-55-49:49955, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:08,180 DEBUG [RS:0;ip-10-197-55-49:49041] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_REGION-ip-10-197-55-49:49041, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:08,181 DEBUG [RS:1;ip-10-197-55-49:49955] executor.ExecutorService(99): Starting executor service name=RS_OPEN_META-ip-10-197-55-49:49955, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:08,181 DEBUG [RS:0;ip-10-197-55-49:49041] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_META-ip-10-197-55-49:49041, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:08,181 DEBUG [RS:1;ip-10-197-55-49:49955] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_REGION-ip-10-197-55-49:49955, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:08,181 DEBUG [RS:1;ip-10-197-55-49:49955] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_META-ip-10-197-55-49:49955, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:08,236 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,236 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 1 2013-07-16 17:14:08,239 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,240 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 1 2013-07-16 17:14:08,241 DEBUG [RS:0;ip-10-197-55-49:49041] zookeeper.ZKUtil(431): regionserver:49041-0x13fe879789b0006 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:08,243 DEBUG [RS:0;ip-10-197-55-49:49041] zookeeper.ZKUtil(431): regionserver:49041-0x13fe879789b0006 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:08,243 INFO [RS:0;ip-10-197-55-49:49041] regionserver.ReplicationSourceManager(184): Current list of replicators: [ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] other RSs: [ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] 2013-07-16 17:14:08,248 DEBUG [RS:1;ip-10-197-55-49:49955] zookeeper.ZKUtil(431): regionserver:49955-0x13fe879789b0005 Set 
watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:08,250 DEBUG [RS:1;ip-10-197-55-49:49955] zookeeper.ZKUtil(431): regionserver:49955-0x13fe879789b0005 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:08,250 INFO [RS:1;ip-10-197-55-49:49955] regionserver.ReplicationSourceManager(184): Current list of replicators: [ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] other RSs: [ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] 2013-07-16 17:14:08,271 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.df.interval is deprecated. Instead, use fs.df.interval 2013-07-16 17:14:08,271 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.task.tracker.http.address is deprecated. Instead, use mapreduce.tasktracker.http.address 2013-07-16 17:14:08,272 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.max.objects is deprecated. Instead, use dfs.namenode.max.objects 2013-07-16 17:14:08,272 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.userlog.retain.hours is deprecated. Instead, use mapreduce.job.userlog.retain.hours 2013-07-16 17:14:08,272 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.local.dir.minspacestart is deprecated. Instead, use mapreduce.tasktracker.local.dir.minspacestart 2013-07-16 17:14:08,273 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.shuffle.read.timeout is deprecated. Instead, use mapreduce.reduce.shuffle.read.timeout 2013-07-16 17:14:08,273 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): io.sort.spill.percent is deprecated. Instead, use mapreduce.map.sort.spill.percent 2013-07-16 17:14:08,273 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.reduce.parallel.copies is deprecated. Instead, use mapreduce.reduce.shuffle.parallelcopies 2013-07-16 17:14:08,273 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication 2013-07-16 17:14:08,274 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.local.dir.minspacekill is deprecated. Instead, use mapreduce.tasktracker.local.dir.minspacekill 2013-07-16 17:14:08,274 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.task.profile is deprecated. Instead, use mapreduce.task.profile 2013-07-16 17:14:08,274 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.heartbeats.in.second is deprecated. Instead, use mapreduce.jobtracker.heartbeats.in.second 2013-07-16 17:14:08,274 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress 2013-07-16 17:14:08,275 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.healthChecker.interval is deprecated. Instead, use mapreduce.tasktracker.healthchecker.interval 2013-07-16 17:14:08,275 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout 2013-07-16 17:14:08,275 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): jobclient.completion.poll.interval is deprecated. 
Instead, use mapreduce.client.completion.pollinterval 2013-07-16 17:14:08,275 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.job.tracker.persist.jobstatus.active is deprecated. Instead, use mapreduce.jobtracker.persist.jobstatus.active 2013-07-16 17:14:08,276 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.output.compression.codec is deprecated. Instead, use mapreduce.output.fileoutputformat.compress.codec 2013-07-16 17:14:08,276 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.job.shuffle.merge.percent is deprecated. Instead, use mapreduce.reduce.shuffle.merge.percent 2013-07-16 17:14:08,276 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.map.max.attempts is deprecated. Instead, use mapreduce.map.maxattempts 2013-07-16 17:14:08,276 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.job.reduce.input.buffer.percent is deprecated. Instead, use mapreduce.reduce.input.buffer.percent 2013-07-16 17:14:08,277 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.task.cache.levels is deprecated. Instead, use mapreduce.jobtracker.taskcache.levels 2013-07-16 17:14:08,277 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor 2013-07-16 17:14:08,277 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.jobtracker.instrumentation is deprecated. Instead, use mapreduce.jobtracker.instrumentation 2013-07-16 17:14:08,277 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.userlog.limit.kb is deprecated. Instead, use mapreduce.task.userlog.limit.kb 2013-07-16 17:14:08,278 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:08,278 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.speculative.execution.slowNodeThreshold is deprecated. Instead, use mapreduce.job.speculative.slownodethreshold 2013-07-16 17:14:08,278 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.skip.map.max.skip.records is deprecated. Instead, use mapreduce.map.skip.maxrecords 2013-07-16 17:14:08,278 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.block.size is deprecated. Instead, use dfs.blocksize 2013-07-16 17:14:08,279 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.access.time.precision is deprecated. Instead, use dfs.namenode.accesstime.precision 2013-07-16 17:14:08,279 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.job.tracker.jobhistory.lru.cache.size is deprecated. Instead, use mapreduce.jobtracker.jobhistory.lru.cache.size 2013-07-16 17:14:08,279 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.job.tracker.persist.jobstatus.hours is deprecated. Instead, use mapreduce.jobtracker.persist.jobstatus.hours 2013-07-16 17:14:08,279 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.job.tracker.handler.count is deprecated. Instead, use mapreduce.jobtracker.handler.count 2013-07-16 17:14:08,280 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent 2013-07-16 17:14:08,280 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb 2013-07-16 17:14:08,280 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.task.profile.maps is deprecated. 
Instead, use mapreduce.task.profile.maps 2013-07-16 17:14:08,280 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative 2013-07-16 17:14:08,281 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.replication.min is deprecated. Instead, use dfs.namenode.replication.min 2013-07-16 17:14:08,281 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces 2013-07-16 17:14:08,281 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize 2013-07-16 17:14:08,281 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.name.edits.dir is deprecated. Instead, use dfs.namenode.edits.dir 2013-07-16 17:14:08,282 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.replication.considerLoad is deprecated. Instead, use dfs.namenode.replication.considerLoad 2013-07-16 17:14:08,282 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.dns.nameserver is deprecated. Instead, use mapreduce.tasktracker.dns.nameserver 2013-07-16 17:14:08,282 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.tasktracker.taskmemorymanager.monitoring-interval is deprecated. Instead, use mapreduce.tasktracker.taskmemorymanager.monitoringinterval 2013-07-16 17:14:08,282 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.expiry.interval is deprecated. Instead, use mapreduce.jobtracker.expire.trackers.interval 2013-07-16 17:14:08,282 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.balance.bandwidthPerSec is deprecated. Instead, use dfs.datanode.balance.bandwidthPerSec 2013-07-16 17:14:08,283 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.max.tracker.failures is deprecated. Instead, use mapreduce.job.maxtaskfailures.per.tracker 2013-07-16 17:14:08,283 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapreduce.jobtracker.split.metainfo.maxsize is deprecated. Instead, use mapreduce.job.split.metainfo.maxsize 2013-07-16 17:14:08,283 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.job.tracker.persist.jobstatus.dir is deprecated. Instead, use mapreduce.jobtracker.persist.jobstatus.dir 2013-07-16 17:14:08,284 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): job.end.retry.attempts is deprecated. Instead, use mapreduce.job.end-notification.retry.attempts 2013-07-16 17:14:08,284 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative 2013-07-16 17:14:08,284 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.safemode.threshold.pct is deprecated. Instead, use dfs.namenode.safemode.threshold-pct 2013-07-16 17:14:08,284 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max 2013-07-16 17:14:08,285 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.task.tracker.task-controller is deprecated. Instead, use mapreduce.tasktracker.taskcontroller 2013-07-16 17:14:08,285 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.jobtracker.maxtasks.per.job is deprecated. 
Instead, use mapreduce.jobtracker.maxtasks.perjob 2013-07-16 17:14:08,285 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.reduce.child.log.level is deprecated. Instead, use mapreduce.reduce.log.level 2013-07-16 17:14:08,285 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.reduce.max.attempts is deprecated. Instead, use mapreduce.reduce.maxattempts 2013-07-16 17:14:08,286 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.map.output.compression.codec is deprecated. Instead, use mapreduce.map.output.compress.codec 2013-07-16 17:14:08,286 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.job.shuffle.input.buffer.percent is deprecated. Instead, use mapreduce.reduce.shuffle.input.buffer.percent 2013-07-16 17:14:08,286 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.task.tracker.report.address is deprecated. Instead, use mapreduce.tasktracker.report.address 2013-07-16 17:14:08,286 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): keep.failed.task.files is deprecated. Instead, use mapreduce.task.files.preserve.failedtasks 2013-07-16 17:14:08,287 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.name.dir.restore is deprecated. Instead, use dfs.namenode.name.dir.restore 2013-07-16 17:14:08,287 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.https.client.keystore.resource is deprecated. Instead, use dfs.client.https.keystore.resource 2013-07-16 17:14:08,287 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): tasktracker.http.threads is deprecated. Instead, use mapreduce.tasktracker.http.threads 2013-07-16 17:14:08,287 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.speculative.execution.slowTaskThreshold is deprecated. Instead, use mapreduce.job.speculative.slowtaskthreshold 2013-07-16 17:14:08,287 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.backup.address is deprecated. Instead, use dfs.namenode.backup.address 2013-07-16 17:14:08,288 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.backup.http.address is deprecated. Instead, use dfs.namenode.backup.http-address 2013-07-16 17:14:08,288 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.acls.enabled is deprecated. Instead, use mapreduce.cluster.acls.enabled 2013-07-16 17:14:08,288 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.max.tracker.blacklists is deprecated. Instead, use mapreduce.jobtracker.tasktracker.maxblacklists 2013-07-16 17:14:08,288 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.indexcache.mb is deprecated. Instead, use mapreduce.tasktracker.indexcache.mb 2013-07-16 17:14:08,288 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.skip.attempts.to.start.skipping is deprecated. Instead, use mapreduce.task.skip.start.attempts 2013-07-16 17:14:08,289 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.reduce.tasks.maximum is deprecated. Instead, use mapreduce.tasktracker.reduce.tasks.maximum 2013-07-16 17:14:08,289 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): jobclient.output.filter is deprecated. Instead, use mapreduce.client.output.filter 2013-07-16 17:14:08,289 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.permissions is deprecated. Instead, use dfs.permissions.enabled 2013-07-16 17:14:08,289 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.jobtracker.restart.recover is deprecated. 
Instead, use mapreduce.jobtracker.restart.recover 2013-07-16 17:14:08,290 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address 2013-07-16 17:14:08,290 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.speculative.execution.speculativeCap is deprecated. Instead, use mapreduce.job.speculative.speculativecap 2013-07-16 17:14:08,290 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): jobclient.progress.monitor.poll.interval is deprecated. Instead, use mapreduce.client.progressmonitor.pollinterval 2013-07-16 17:14:08,290 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.datanode.max.xcievers is deprecated. Instead, use dfs.datanode.max.transfer.threads 2013-07-16 17:14:08,290 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.map.child.log.level is deprecated. Instead, use mapreduce.map.log.level 2013-07-16 17:14:08,291 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.output.compression.type is deprecated. Instead, use mapreduce.output.fileoutputformat.compress.type 2013-07-16 17:14:08,291 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.job.tracker.retiredjobs.cache.size is deprecated. Instead, use mapreduce.jobtracker.retiredjobs.cache.size 2013-07-16 17:14:08,291 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.https.need.client.auth is deprecated. Instead, use dfs.client.https.need-auth 2013-07-16 17:14:08,291 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.dns.interface is deprecated. Instead, use mapreduce.tasktracker.dns.interface 2013-07-16 17:14:08,292 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.task.profile.reduces is deprecated. Instead, use mapreduce.task.profile.reduces 2013-07-16 17:14:08,292 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.https.address is deprecated. Instead, use dfs.namenode.https-address 2013-07-16 17:14:08,292 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): job.end.retry.interval is deprecated. Instead, use mapreduce.job.end-notification.retry.interval 2013-07-16 17:14:08,292 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.jobtracker.job.history.block.size is deprecated. Instead, use mapreduce.jobtracker.jobhistory.block.size 2013-07-16 17:14:08,293 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.child.tmp is deprecated. Instead, use mapreduce.task.tmp.dir 2013-07-16 17:14:08,293 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.replication.interval is deprecated. Instead, use dfs.namenode.replication.interval 2013-07-16 17:14:08,293 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps 2013-07-16 17:14:08,293 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.map.tasks.maximum is deprecated. Instead, use mapreduce.tasktracker.map.tasks.maximum 2013-07-16 17:14:08,293 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed 2013-07-16 17:14:08,294 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): fs.checkpoint.edits.dir is deprecated. Instead, use dfs.namenode.checkpoint.edits.dir 2013-07-16 17:14:08,294 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.job.queue.name is deprecated. 
Instead, use mapreduce.job.queuename 2013-07-16 17:14:08,294 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.write.packet.size is deprecated. Instead, use dfs.client-write-packet-size 2013-07-16 17:14:08,294 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.jobtracker.taskScheduler is deprecated. Instead, use mapreduce.jobtracker.taskscheduler 2013-07-16 17:14:08,295 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.skip.reduce.max.skip.groups is deprecated. Instead, use mapreduce.reduce.skip.maxgroups 2013-07-16 17:14:08,295 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.permissions.supergroup is deprecated. Instead, use dfs.permissions.superusergroup 2013-07-16 17:14:08,295 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address 2013-07-16 17:14:08,295 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.healthChecker.script.timeout is deprecated. Instead, use mapreduce.tasktracker.healthchecker.script.timeout 2013-07-16 17:14:08,295 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.instrumentation is deprecated. Instead, use mapreduce.tasktracker.instrumentation 2013-07-16 17:14:08,296 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.job.reuse.jvm.num.tasks is deprecated. Instead, use mapreduce.job.jvm.numtasks 2013-07-16 17:14:08,296 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.inmem.merge.threshold is deprecated. Instead, use mapreduce.reduce.merge.inmem.threshold 2013-07-16 17:14:08,296 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): topology.script.number.args is deprecated. Instead, use net.topology.script.number.args 2013-07-16 17:14:08,296 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.reduce.slowstart.completed.maps is deprecated. Instead, use mapreduce.job.reduce.slowstart.completedmaps 2013-07-16 17:14:08,296 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): dfs.umaskmode is deprecated. Instead, use fs.permissions.umask-mode 2013-07-16 17:14:08,297 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): dfs.secondary.http.address is deprecated. Instead, use dfs.namenode.secondary.http-address 2013-07-16 17:14:08,297 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): fs.checkpoint.period is deprecated. Instead, use dfs.namenode.checkpoint.period 2013-07-16 17:14:08,297 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.tasktracker.tasks.sleeptime-before-sigkill is deprecated. Instead, use mapreduce.tasktracker.tasks.sleeptimebeforesigkill 2013-07-16 17:14:08,297 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.compress.map.output is deprecated. Instead, use mapreduce.map.output.compress 2013-07-16 17:14:08,298 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): mapred.merge.recordsBeforeProgress is deprecated. Instead, use mapreduce.task.merge.progress.records 2013-07-16 17:14:08,298 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapred.shuffle.connect.timeout is deprecated. Instead, use mapreduce.reduce.shuffle.connect.timeout 2013-07-16 17:14:08,298 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): io.bytes.per.checksum is deprecated. 
Instead, use dfs.bytes-per-checksum 2013-07-16 17:14:08,302 INFO [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2ce07e6b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:08,304 INFO [RS:0;ip-10-197-55-49:49041] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2ce07e6b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:08,307 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2ce07e6b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:08,309 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2ce07e6b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:08,309 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2ce07e6b-0x13fe879789b000d connected 2013-07-16 17:14:08,311 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2ce07e6b-0x13fe879789b000e connected 2013-07-16 17:14:08,389 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,389 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 2 2013-07-16 17:14:08,392 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,392 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 2 2013-07-16 17:14:08,404 WARN [RS:1;ip-10-197-55-49:49955] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:08,405 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max 2013-07-16 17:14:08,408 WARN [RS:0;ip-10-197-55-49:49041] conf.Configuration(817): io.bytes.per.checksum is deprecated. 
Instead, use dfs.bytes-per-checksum 2013-07-16 17:14:08,413 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(1199): Serving as ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, RpcServer on ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:49041, sessionid=0x13fe879789b0006 2013-07-16 17:14:08,414 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1199): Serving as ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, RpcServer on ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:49955, sessionid=0x13fe879789b0005 2013-07-16 17:14:08,414 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(170): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 starting 2013-07-16 17:14:08,415 DEBUG [RS:1;ip-10-197-55-49:49955] snapshot.RegionServerSnapshotManager(140): Start Snapshot Manager ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:08,415 DEBUG [RS:0;ip-10-197-55-49:49041] snapshot.RegionServerSnapshotManager(140): Start Snapshot Manager ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:08,414 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.SplitLogWorker(170): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 starting 2013-07-16 17:14:08,416 DEBUG [RS:0;ip-10-197-55-49:49041] procedure.ZKProcedureMemberRpcs(339): Starting procedure member 'null' 2013-07-16 17:14:08,415 DEBUG [RS:1;ip-10-197-55-49:49955] procedure.ZKProcedureMemberRpcs(339): Starting procedure member 'null' 2013-07-16 17:14:08,417 DEBUG [RS:1;ip-10-197-55-49:49955] procedure.ZKProcedureMemberRpcs(138): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2013-07-16 17:14:08,417 DEBUG [RS:0;ip-10-197-55-49:49041] procedure.ZKProcedureMemberRpcs(138): Checking for aborted procedures on node: '/1/online-snapshot/abort' 2013-07-16 17:14:08,419 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x360c7f06 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:08,421 DEBUG [RS:1;ip-10-197-55-49:49955] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2013-07-16 17:14:08,421 DEBUG [RS:0;ip-10-197-55-49:49041] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/1/online-snapshot/acquired' 2013-07-16 17:14:08,423 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x360c7f06 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:08,424 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x360c7f06-0x13fe879789b000f connected 2013-07-16 17:14:08,431 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4cb0f24a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:08,435 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4cb0f24a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:08,436 DEBUG 
[SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4cb0f24a-0x13fe879789b0010 connected 2013-07-16 17:14:08,590 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,591 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 3 2013-07-16 17:14:08,594 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,594 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 3 2013-07-16 17:14:08,892 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,892 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 4 2013-07-16 17:14:08,895 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:08,895 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 4 2013-07-16 17:14:09,138 INFO [M:0;ip-10-197-55-49:50904] zookeeper.MetaRegionTracker(164): Unsetting META region location in ZooKeeper 2013-07-16 17:14:09,141 WARN [M:0;ip-10-197-55-49:50904] zookeeper.RecoverableZooKeeper(163): Node /1/meta-region-server already deleted, retry=false 2013-07-16 17:14:09,143 DEBUG [M:0;ip-10-197-55-49:50904] master.AssignmentManager(2126): No previous transition plan was found (or we are ignoring an existing plan) for .META.,,1.1028785192 so generated a random one; hri=.META.,,1.1028785192, src=, dest=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; 2 (online=2, available=2) available servers, forceNewPlan=false 2013-07-16 17:14:09,144 DEBUG [M:0;ip-10-197-55-49:50904] zookeeper.ZKAssign(208): master:50904-0x13fe879789b0004 Creating (or updating) unassigned node for 1028785192 with OFFLINE state 2013-07-16 17:14:09,157 DEBUG [M:0;ip-10-197-55-49:50904] master.AssignmentManager(1835): Setting table .META. to ENABLED state. 2013-07-16 17:14:09,169 INFO [M:0;ip-10-197-55-49:50904] master.AssignmentManager(1854): Assigning region .META.,,1.1028785192 to ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,169 INFO [M:0;ip-10-197-55-49:50904] master.RegionStates(265): Transitioned from {1028785192/.META. state=OFFLINE, ts=1373994849143, server=null} to {1028785192/.META. state=PENDING_OPEN, ts=1373994849169, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:09,170 DEBUG [M:0;ip-10-197-55-49:50904] master.ServerManager(735): New admin connection to ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,180 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open .META.,,1.1028785192 2013-07-16 17:14:09,185 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 1028785192/.META. 
from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:09,187 INFO [M:0;ip-10-197-55-49:50904] master.ServerManager(555): AssignmentManager hasn't finished failover cleanup 2013-07-16 17:14:09,191 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/1028785192 2013-07-16 17:14:09,191 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 1028785192 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:09,192 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] regionserver.HRegionServer(1439): logdir=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,193 INFO [RS_OPEN_META-ip-10-197-55-49:49041-0] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:09,195 DEBUG [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=1028785192/.META., current state from region state map ={1028785192/.META. state=PENDING_OPEN, ts=1373994849169, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:09,196 INFO [AM.ZK.Worker-pool-2-thread-1] master.RegionStates(265): Transitioned from {1028785192/.META. state=PENDING_OPEN, ts=1373994849169, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {1028785192/.META. state=OPENING, ts=1373994849196, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:09,205 INFO [RS_OPEN_META-ip-10-197-55-49:49041-0] wal.FSHLog(522): New WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994849198.meta 2013-07-16 17:14:09,206 INFO [RS_OPEN_META-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:09,221 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] coprocessor.CoprocessorHost(180): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2013-07-16 17:14:09,231 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] regionserver.HRegion(5160): Registered coprocessor service: region=.META.,,1 service=MultiRowMutationService 2013-07-16 17:14:09,234 INFO [RS_OPEN_META-ip-10-197-55-49:49041-0] regionserver.RegionCoprocessorHost(197): Load coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of .META. successfully. 2013-07-16 17:14:09,239 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table .META. 
1028785192 2013-07-16 17:14:09,240 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated .META.,,1.1028785192 2013-07-16 17:14:09,249 INFO [StoreOpener-1028785192/.META.-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:09,256 INFO [RS_OPEN_META-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined 1028785192/.META.; next sequenceid=1 2013-07-16 17:14:09,257 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 1028785192/.META. 2013-07-16 17:14:09,260 INFO [PostOpenDeployTasks:1028785192] regionserver.HRegionServer(1703): Post open deploy tasks for region=.META.,,1.1028785192 2013-07-16 17:14:09,261 INFO [PostOpenDeployTasks:1028785192] zookeeper.MetaRegionTracker(123): Setting META region location in ZooKeeper as ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,263 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/meta-region-server 2013-07-16 17:14:09,264 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49955-0x13fe879789b0005 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/meta-region-server 2013-07-16 17:14:09,264 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/1/meta-region-server 2013-07-16 17:14:09,269 INFO [PostOpenDeployTasks:1028785192] regionserver.HRegionServer(1728): Done with post open deploy task for region=.META.,,1.1028785192 2013-07-16 17:14:09,269 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 1028785192/.META. from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:09,273 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/1028785192 2013-07-16 17:14:09,273 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 1028785192 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:09,274 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,274 DEBUG [RS_OPEN_META-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened .META.,,1.1028785192 on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,275 DEBUG [AM.ZK.Worker-pool-2-thread-2] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=1028785192/.META., current state from region state map ={1028785192/.META. 
state=OPENING, ts=1373994849196, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:09,275 INFO [AM.ZK.Worker-pool-2-thread-2] master.RegionStates(265): Transitioned from {1028785192/.META. state=OPENING, ts=1373994849196, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {1028785192/.META. state=OPEN, ts=1373994849275, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:09,277 INFO [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(143): Handling OPENED event for 1028785192/.META. from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:09,278 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 1028785192 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:09,282 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/1028785192 2013-07-16 17:14:09,283 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 1028785192 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:09,283 DEBUG [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager$4(1218): The znode of .META.,,1.1028785192 has been deleted, region state: {1028785192/.META. state=OPEN, ts=1373994849275, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:09,283 INFO [AM.ZK.Worker-pool-2-thread-3] master.RegionStates(301): Onlined 1028785192/.META. on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,284 INFO [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager$4(1223): The master has opened .META.,,1.1028785192 that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,285 INFO [M:0;ip-10-197-55-49:50904] master.HMaster(973): .META. assigned=1, rit=false, location=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:09,294 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:09,294 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 5 2013-07-16 17:14:09,297 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:09,297 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 5 2013-07-16 17:14:09,345 DEBUG [M:0;ip-10-197-55-49:50904] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:09,345 INFO [M:0;ip-10-197-55-49:50904] catalog.MetaMigrationConvertingToPB(166): .META. doesn't have any entries to update. 
2013-07-16 17:14:09,345 INFO [M:0;ip-10-197-55-49:50904] catalog.MetaMigrationConvertingToPB(132): META already up-to date with PB serialization 2013-07-16 17:14:09,361 DEBUG [M:0;ip-10-197-55-49:50904] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:09,362 INFO [M:0;ip-10-197-55-49:50904] master.AssignmentManager(463): Clean cluster startup. Assigning userregions 2013-07-16 17:14:09,363 DEBUG [M:0;ip-10-197-55-49:50904] zookeeper.ZKAssign(452): master:50904-0x13fe879789b0004 Deleting any existing unassigned nodes 2013-07-16 17:14:09,378 DEBUG [M:0;ip-10-197-55-49:50904] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:09,394 INFO [M:0;ip-10-197-55-49:50904] master.HMaster(872): Master has completed initialization 2013-07-16 17:14:09,407 DEBUG [CatalogJanitor-ip-10-197-55-49:50904] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:09,414 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:09,430 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(871): Minicluster is up 2013-07-16 17:14:09,430 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(801): Starting up minicluster with 1 master(s) and 2 regionserver(s) and 2 datanode(s) 2013-07-16 17:14:09,430 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(302): System.getProperty("hadoop.log.dir") already set to: /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop_logs so I do NOT create it in target/test-data/f2763e32-fe49-4988-ac94-eeca82431821 2013-07-16 17:14:09,451 WARN [pool-1-thread-1] hbase.HBaseTestingUtility(306): hadoop.log.dir property value differs in configuration and system: Configuration=/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop-log-dir while System=/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop_logs Erasing configuration value by system value. 2013-07-16 17:14:09,451 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(302): System.getProperty("hadoop.tmp.dir") already set to: /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop_tmp so I do NOT create it in target/test-data/f2763e32-fe49-4988-ac94-eeca82431821 2013-07-16 17:14:09,452 WARN [pool-1-thread-1] hbase.HBaseTestingUtility(306): hadoop.tmp.dir property value differs in configuration and system: Configuration=/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop-tmp-dir while System=/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/hadoop_tmp Erasing configuration value by system value. 
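The entries above show HBaseTestingUtility bringing a second mini-cluster (1 master, 2 region servers, 2 datanodes) up next to the one already running, parented under a separate znode. The test's own setup code is not part of this page; the following is only a minimal Java sketch of how two such clusters are typically started against a shared MiniZK ensemble, assuming the 0.95-era HBaseTestingUtility API, with the class and variable names (TwoClusterSketch, utility1, utility2) invented for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;

// Sketch only, not the code under test: two mini-clusters sharing one ZooKeeper
// ensemble, parented at /1 and /2 as in the log above.
public class TwoClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf1 = HBaseConfiguration.create();
    conf1.set("zookeeper.znode.parent", "/1");        // first cluster lives under /1
    HBaseTestingUtility utility1 = new HBaseTestingUtility(conf1);
    utility1.startMiniZKCluster();                    // one MiniZK ensemble for both clusters
    utility1.startMiniCluster(1, 2);                  // 1 master, 2 region servers (and 2 datanodes)

    Configuration conf2 = HBaseConfiguration.create(conf1);
    conf2.set("zookeeper.znode.parent", "/2");        // second cluster lives under /2
    HBaseTestingUtility utility2 = new HBaseTestingUtility(conf2);
    utility2.setZkCluster(utility1.getZkCluster());   // reuse utility1's ZooKeeper ensemble
    utility2.startMiniCluster(1, 2);

    // ... test body would run here ...

    utility2.shutdownMiniCluster();
    utility1.shutdownMiniCluster();
  }
}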
2013-07-16 17:14:09,452 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(358): Created new mini-cluster data directory: /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7 2013-07-16 17:14:09,453 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting test.cache.data to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/cache_data in system properties and HBase conf 2013-07-16 17:14:09,453 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting hadoop.tmp.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/hadoop_tmp in system properties and HBase conf 2013-07-16 17:14:09,454 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting hadoop.log.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/hadoop_logs in system properties and HBase conf 2013-07-16 17:14:09,454 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting mapred.local.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/mapred_local in system properties and HBase conf 2013-07-16 17:14:09,454 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(588): Setting mapred.temp.dir to /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/mapred_temp in system properties and HBase conf 2013-07-16 17:14:09,454 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(571): read short circuit is ON for user ec2-user 2013-07-16 17:14:09,455 DEBUG [pool-1-thread-1] fs.HFileSystem(213): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2013-07-16 17:14:09,609 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26 2013-07-16 17:14:09,613 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/home/ec2-user/jenkins/maven-repositories/0/org/apache/hadoop/hadoop-hdfs/2.0.5-alpha/hadoop-hdfs-2.0.5-alpha-tests.jar!/webapps/hdfs to /tmp/Jetty_localhost_52011_hdfs____atv6t9/webapp 2013-07-16 17:14:09,701 INFO [pool-1-thread-1] log.Slf4jLog(67): Started SelectChannelConnector@localhost:52011 2013-07-16 17:14:09,759 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26 2013-07-16 17:14:09,763 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/home/ec2-user/jenkins/maven-repositories/0/org/apache/hadoop/hadoop-hdfs/2.0.5-alpha/hadoop-hdfs-2.0.5-alpha-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_38693_datanode____.h01qha/webapp 2013-07-16 17:14:09,796 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:09,796 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 6 2013-07-16 17:14:09,799 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:09,799 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 6 2013-07-16 17:14:09,850 INFO [pool-1-thread-1] log.Slf4jLog(67): Started SelectChannelConnector@localhost:38693 2013-07-16 17:14:09,929 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26 2013-07-16 17:14:09,934 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/home/ec2-user/jenkins/maven-repositories/0/org/apache/hadoop/hadoop-hdfs/2.0.5-alpha/hadoop-hdfs-2.0.5-alpha-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_49995_datanode____.9039ux/webapp 2013-07-16 17:14:09,965 INFO [IPC Server handler 3 on 56710] blockmanagement.BlockManager(1584): BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-800074225-10.197.55.49-51438-1373994849885, infoPort=38693, ipcPort=49060, storageInfo=lv=-40;cid=testClusterID;nsid=347311166;c=0), blocks: 0, processing time: 1 msecs 2013-07-16 17:14:10,030 INFO [pool-1-thread-1] log.Slf4jLog(67): Started SelectChannelConnector@localhost:49995 2013-07-16 17:14:10,133 INFO [IPC Server handler 8 on 56710] blockmanagement.BlockManager(1584): BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-248467811-10.197.55.49-47006-1373994850069, infoPort=49995, ipcPort=35081, storageInfo=lv=-40;cid=testClusterID;nsid=347311166;c=0), blocks: 0, processing time: 0 msecs 2013-07-16 17:14:10,171 WARN [pool-1-thread-1] conf.Configuration(817): fs.default.name is deprecated. 
Instead, use fs.defaultFS 2013-07-16 17:14:10,198 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-1224594832267841260_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:10,201 INFO [IPC Server handler 7 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-1224594832267841260_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:10,203 DEBUG [pool-1-thread-1] util.FSUtils(629): Created version file at hdfs://localhost:56710/user/ec2-user/hbase with version=7 2013-07-16 17:14:10,205 DEBUG [pool-1-thread-1] client.HConnectionManager(2466): master/ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:0 HConnection server-to-server retries=60 2013-07-16 17:14:10,212 INFO [pool-1-thread-1] master.HMaster(421): hbase.rootdir=hdfs://localhost:56710/user/ec2-user/hbase, hbase.cluster.distributed=false 2013-07-16 17:14:10,214 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=master:50669 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:10,219 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:10,220 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(462): Node /2 already exists and this is not a retry 2013-07-16 17:14:10,226 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): master:50669-0x13fe879789b0011 connected 2013-07-16 17:14:10,264 WARN [pool-1-thread-1] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:10,266 WARN [pool-1-thread-1] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max 2013-07-16 17:14:10,268 WARN [pool-1-thread-1] conf.Configuration(817): io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum 2013-07-16 17:14:10,269 DEBUG [pool-1-thread-1] client.HConnectionManager(2466): regionserver/ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:0 HConnection server-to-server retries=60 2013-07-16 17:14:10,301 WARN [pool-1-thread-1] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:10,302 WARN [pool-1-thread-1] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max 2013-07-16 17:14:10,304 WARN [pool-1-thread-1] conf.Configuration(817): io.bytes.per.checksum is deprecated. 
Instead, use dfs.bytes-per-checksum 2013-07-16 17:14:10,305 DEBUG [pool-1-thread-1] client.HConnectionManager(2466): regionserver/ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:0 HConnection server-to-server retries=60 2013-07-16 17:14:10,317 DEBUG [M:0;ip-10-197-55-49:50669] zookeeper.ZKUtil(433): master:50669-0x13fe879789b0011 Set watcher on znode that does not yet exist, /2/master 2013-07-16 17:14:10,319 DEBUG [M:0;ip-10-197-55-49:50669] zookeeper.ZKUtil(433): master:50669-0x13fe879789b0011 Set watcher on znode that does not yet exist, /2/running 2013-07-16 17:14:10,322 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/master 2013-07-16 17:14:10,326 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50669-0x13fe879789b0011 Set watcher on existing znode=/2/master 2013-07-16 17:14:10,326 DEBUG [pool-1-thread-1-EventThread] master.ActiveMasterManager(119): A master is now available 2013-07-16 17:14:10,328 WARN [M:0;ip-10-197-55-49:50669] hbase.ZNodeClearer(57): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2013-07-16 17:14:10,328 INFO [M:0;ip-10-197-55-49:50669] master.ActiveMasterManager(170): Registered Active Master=ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211 2013-07-16 17:14:10,329 INFO [M:0;ip-10-197-55-49:50669] master.SplitLogManager(201): timeout=120000, unassigned timeout=180000 2013-07-16 17:14:10,329 INFO [M:0;ip-10-197-55-49:50669] master.SplitLogManager(210): distributedLogReplay = false 2013-07-16 17:14:10,334 INFO [M:0;ip-10-197-55-49:50669] master.SplitLogManager(1082): Found 0 orphan tasks and 0 rescan nodes 2013-07-16 17:14:10,398 INFO [IPC Server handler 9 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_9142975573575931356_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:10,398 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:10,399 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 7 2013-07-16 17:14:10,402 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:10,402 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 7 2013-07-16 17:14:10,407 INFO [IPC Server handler 0 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_9142975573575931356_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:10,410 DEBUG [M:0;ip-10-197-55-49:50669] util.FSUtils(758): Created cluster ID file at hdfs://localhost:56710/user/ec2-user/hbase/hbase.id with ID: 2a81acba-2c55-4568-ac13-a15ee9cb847a 2013-07-16 17:14:10,415 INFO [pool-1-thread-1] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;ip-10-197-55-49:55133 2013-07-16 17:14:10,424 INFO [pool-1-thread-1] regionserver.ShutdownHook(87): 
Installed shutdown hook thread: Shutdownhook:RS:1;ip-10-197-55-49:39939 2013-07-16 17:14:10,446 INFO [M:0;ip-10-197-55-49:50669] master.MasterFileSystem(556): BOOTSTRAP: creating META region 2013-07-16 17:14:10,449 INFO [RS:0;ip-10-197-55-49:55133] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:55133 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:10,447 INFO [M:0;ip-10-197-55-49:50669] regionserver.HRegion(4031): creating HRegion .META. HTD == '.META.', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '8192', ENCODE_ON_DISK => 'true', IN_MEMORY => 'false', BLOCKCACHE => 'false'} RootDir = hdfs://localhost:56710/user/ec2-user/hbase Table name == .META. 2013-07-16 17:14:10,464 INFO [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(120): Process identifier=regionserver:39939 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:10,466 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:10,473 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(384): regionserver:55133-0x13fe879789b0012 connected 2013-07-16 17:14:10,473 DEBUG [RS:0;ip-10-197-55-49:55133] zookeeper.ZKUtil(431): regionserver:55133-0x13fe879789b0012 Set watcher on existing znode=/2/master 2013-07-16 17:14:10,476 DEBUG [RS:0;ip-10-197-55-49:55133] zookeeper.ZKUtil(433): regionserver:55133-0x13fe879789b0012 Set watcher on znode that does not yet exist, /2/running 2013-07-16 17:14:10,486 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:39939 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:10,486 DEBUG [RS:1;ip-10-197-55-49:39939] zookeeper.ZKUtil(431): regionserver:39939 Set watcher on existing znode=/2/master 2013-07-16 17:14:10,487 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(384): regionserver:39939-0x13fe879789b0013 connected 2013-07-16 17:14:10,491 DEBUG [RS:1;ip-10-197-55-49:39939] zookeeper.ZKUtil(433): regionserver:39939-0x13fe879789b0013 Set watcher on znode that does not yet exist, /2/running 2013-07-16 17:14:10,499 INFO [IPC Server handler 2 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_4068722374610736872_1006{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:10,500 INFO [IPC Server handler 3 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_4068722374610736872_1006{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:10,504 INFO [M:0;ip-10-197-55-49:50669] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:10,519 INFO [M:0;ip-10-197-55-49:50669] wal.FSHLog(522): New WAL 
/user/ec2-user/hbase/.META./1028785192/.logs/hlog.1373994850514 2013-07-16 17:14:10,521 DEBUG [M:0;ip-10-197-55-49:50669] regionserver.HRegion(534): Instantiated .META.,,1.1028785192 2013-07-16 17:14:10,530 INFO [StoreOpener-1028785192/.META.-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:10,539 INFO [M:0;ip-10-197-55-49:50669] regionserver.HRegion(629): Onlined 1028785192/.META.; next sequenceid=1 2013-07-16 17:14:10,539 DEBUG [M:0;ip-10-197-55-49:50669] regionserver.HRegion(965): Closing .META.,,1.1028785192: disabling compactions & flushes 2013-07-16 17:14:10,539 DEBUG [M:0;ip-10-197-55-49:50669] regionserver.HRegion(987): Updates disabled for region .META.,,1.1028785192 2013-07-16 17:14:10,540 INFO [StoreCloserThread-.META.,,1.1028785192-1] regionserver.HStore(661): Closed info 2013-07-16 17:14:10,540 INFO [M:0;ip-10-197-55-49:50669] regionserver.HRegion(1045): Closed .META.,,1.1028785192 2013-07-16 17:14:10,540 INFO [M:0;ip-10-197-55-49:50669.logSyncer] wal.FSHLog$LogSyncer(966): M:0;ip-10-197-55-49:50669.logSyncer exiting 2013-07-16 17:14:10,541 DEBUG [M:0;ip-10-197-55-49:50669] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:56710/user/ec2-user/hbase/.META./1028785192/.logs 2013-07-16 17:14:10,568 INFO [IPC Server handler 1 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-3981129754661527065_1008{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:10,570 INFO [IPC Server handler 3 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-3981129754661527065_1008 size 15 2013-07-16 17:14:10,581 DEBUG [M:0;ip-10-197-55-49:50669] wal.FSHLog(768): Moved 1 WAL file(s) to /user/ec2-user/hbase/.META./1028785192/.oldlogs 2013-07-16 17:14:10,608 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_8221585979102772371_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:10,609 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_8221585979102772371_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:10,622 INFO [M:0;ip-10-197-55-49:50669] fs.HFileSystem(244): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2013-07-16 17:14:10,629 INFO [M:0;ip-10-197-55-49:50669] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x673ceab3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:10,632 DEBUG [M:0;ip-10-197-55-49:50669-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x673ceab3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:10,634 DEBUG [M:0;ip-10-197-55-49:50669-EventThread] zookeeper.ZooKeeperWatcher(384): 
hconnection-0x673ceab3-0x13fe879789b0014 connected 2013-07-16 17:14:10,639 DEBUG [M:0;ip-10-197-55-49:50669] catalog.CatalogTracker(192): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@13a4071a 2013-07-16 17:14:10,641 DEBUG [M:0;ip-10-197-55-49:50669] zookeeper.ZKUtil(433): master:50669-0x13fe879789b0011 Set watcher on znode that does not yet exist, /2/meta-region-server 2013-07-16 17:14:10,642 DEBUG [M:0;ip-10-197-55-49:50669] zookeeper.ZKUtil(433): master:50669-0x13fe879789b0011 Set watcher on znode that does not yet exist, /2/balancer 2013-07-16 17:14:10,649 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:39939-0x13fe879789b0013 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running 2013-07-16 17:14:10,649 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running 2013-07-16 17:14:10,649 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/running 2013-07-16 17:14:10,654 INFO [RS:0;ip-10-197-55-49:55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2d76343e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:10,658 INFO [M:0;ip-10-197-55-49:50669] master.HMaster(654): Server active/primary master=ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211, sessionid=0x13fe879789b0011, setting cluster-up flag (Was=false) 2013-07-16 17:14:10,659 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2d76343e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:10,660 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2d76343e-0x13fe879789b0015 connected 2013-07-16 17:14:10,661 DEBUG [RS:0;ip-10-197-55-49:55133] catalog.CatalogTracker(192): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@73f378c8 2013-07-16 17:14:10,663 DEBUG [RS:0;ip-10-197-55-49:55133] zookeeper.ZKUtil(433): regionserver:55133-0x13fe879789b0012 Set watcher on znode that does not yet exist, /2/meta-region-server 2013-07-16 17:14:10,666 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(698): ClusterId : 2a81acba-2c55-4568-ac13-a15ee9cb847a 2013-07-16 17:14:10,678 INFO [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x757ecdf0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:10,683 INFO [RS:0;ip-10-197-55-49:55133] zookeeper.RecoverableZooKeeper(462): Node /2/online-snapshot/acquired already exists and this is not a retry 2013-07-16 17:14:10,683 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x757ecdf0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:10,685 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x757ecdf0-0x13fe879789b0016 connected 2013-07-16 17:14:10,688 INFO [M:0;ip-10-197-55-49:50669] zookeeper.RecoverableZooKeeper(462): Node /2/online-snapshot/reached already exists and this is not a retry 2013-07-16 17:14:10,692 INFO [M:0;ip-10-197-55-49:50669] procedure.ZKProcedureUtil(258): Clearing all procedure znodes: /2/online-snapshot/acquired /2/online-snapshot/reached /2/online-snapshot/abort 
2013-07-16 17:14:10,702 DEBUG [RS:1;ip-10-197-55-49:39939] catalog.CatalogTracker(192): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@1c344a47 2013-07-16 17:14:10,703 INFO [RS:0;ip-10-197-55-49:55133] zookeeper.RecoverableZooKeeper(462): Node /2/online-snapshot/abort already exists and this is not a retry 2013-07-16 17:14:10,704 INFO [RS:0;ip-10-197-55-49:55133] regionserver.MemStoreFlusher(117): globalMemStoreLimit=675.6 M, globalMemStoreLimitLowMark=641.8 M, maxHeap=1.6 G 2013-07-16 17:14:10,704 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer$CompactionChecker(1323): CompactionChecker runs every 0sec 2013-07-16 17:14:10,705 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1935): reportForDuty to master=ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211 with port=55133, startcode=1373994850276 2013-07-16 17:14:10,713 DEBUG [RS:1;ip-10-197-55-49:39939] zookeeper.ZKUtil(433): regionserver:39939-0x13fe879789b0013 Set watcher on znode that does not yet exist, /2/meta-region-server 2013-07-16 17:14:10,715 DEBUG [M:0;ip-10-197-55-49:50669] procedure.ZKProcedureCoordinatorRpcs(194): Starting the controller for procedure member:ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211 2013-07-16 17:14:10,716 DEBUG [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1951): Master is not running yet 2013-07-16 17:14:10,716 WARN [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(793): reportForDuty failed; sleeping and then retrying. 2013-07-16 17:14:10,717 WARN [M:0;ip-10-197-55-49:50669] snapshot.SnapshotManager(269): Couldn't delete working snapshot directory: hdfs://localhost:56710/user/ec2-user/hbase/.hbase-snapshot/.tmp 2013-07-16 17:14:10,717 DEBUG [M:0;ip-10-197-55-49:50669] executor.ExecutorService(99): Starting executor service name=MASTER_OPEN_REGION-ip-10-197-55-49:50669, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:10,718 DEBUG [M:0;ip-10-197-55-49:50669] executor.ExecutorService(99): Starting executor service name=MASTER_CLOSE_REGION-ip-10-197-55-49:50669, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:10,718 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(698): ClusterId : 2a81acba-2c55-4568-ac13-a15ee9cb847a 2013-07-16 17:14:10,718 DEBUG [M:0;ip-10-197-55-49:50669] executor.ExecutorService(99): Starting executor service name=MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:10,718 DEBUG [M:0;ip-10-197-55-49:50669] executor.ExecutorService(99): Starting executor service name=MASTER_META_SERVER_OPERATIONS-ip-10-197-55-49:50669, corePoolSize=5, maxPoolSize=5 2013-07-16 17:14:10,719 DEBUG [M:0;ip-10-197-55-49:50669] executor.ExecutorService(99): Starting executor service name=M_LOG_REPLAY_OPS-ip-10-197-55-49:50669, corePoolSize=10, maxPoolSize=10 2013-07-16 17:14:10,719 DEBUG [M:0;ip-10-197-55-49:50669] executor.ExecutorService(99): Starting executor service name=MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:10,719 DEBUG [M:0;ip-10-197-55-49:50669] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2013-07-16 17:14:10,721 INFO [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(462): Node /2/online-snapshot/acquired already exists and this is not a retry 2013-07-16 17:14:10,721 INFO [M:0;ip-10-197-55-49:50669] zookeeper.RecoverableZooKeeper(120): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 
17:14:10,726 DEBUG [M:0;ip-10-197-55-49:50669-EventThread] zookeeper.ZooKeeperWatcher(307): replicationLogCleaner Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:10,727 INFO [RS:1;ip-10-197-55-49:39939] regionserver.MemStoreFlusher(117): globalMemStoreLimit=675.6 M, globalMemStoreLimitLowMark=641.8 M, maxHeap=1.6 G 2013-07-16 17:14:10,727 DEBUG [M:0;ip-10-197-55-49:50669-EventThread] zookeeper.ZooKeeperWatcher(384): replicationLogCleaner-0x13fe879789b0017 connected 2013-07-16 17:14:10,727 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer$CompactionChecker(1323): CompactionChecker runs every 0sec 2013-07-16 17:14:10,728 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1935): reportForDuty to master=ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211 with port=39939, startcode=1373994850314 2013-07-16 17:14:10,733 DEBUG [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1951): Master is not running yet 2013-07-16 17:14:10,734 WARN [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(793): reportForDuty failed; sleeping and then retrying. 2013-07-16 17:14:10,734 DEBUG [M:0;ip-10-197-55-49:50669] master.ReplicationLogCleaner(109): Didn't find this log in ZK, deleting: null 2013-07-16 17:14:10,734 DEBUG [M:0;ip-10-197-55-49:50669] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2013-07-16 17:14:10,735 DEBUG [M:0;ip-10-197-55-49:50669] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotLogCleaner 2013-07-16 17:14:10,736 DEBUG [M:0;ip-10-197-55-49:50669] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2013-07-16 17:14:10,736 DEBUG [M:0;ip-10-197-55-49:50669] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2013-07-16 17:14:10,737 DEBUG [M:0;ip-10-197-55-49:50669] cleaner.CleanerChore(86): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2013-07-16 17:14:10,737 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(800): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 2, maximum of 2, timeout of 4500 ms, interval of 1500 ms. 
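The master above is waiting in ServerManager for exactly two region servers to check in, which is the shape of the mini cluster this test runs against. Purely as an illustrative sketch (not the test's own code), a two-region-server mini cluster is typically brought up with HBaseTestingUtility roughly like this; the "/2" znode root is taken from the log, everything else is an assumption:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The slave cluster in this log lives under the "/2" znode; hypothetical setting for the sketch.
        conf.set("zookeeper.znode.parent", "/2");
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        // One master and two region servers, matching "expecting minimum of 2, maximum of 2" above.
        util.startMiniCluster(1, 2);
        try {
          // ... run test logic against util.getConfiguration() ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }

Once both region servers report for duty, the "Waiting for region servers count to settle" loop above completes and master initialization continues.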
2013-07-16 17:14:11,101 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:11,101 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 8 2013-07-16 17:14:11,104 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:14:11,104 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Waiting for peers, sleeping 100 times 8 2013-07-16 17:14:11,717 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1935): reportForDuty to master=ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211 with port=55133, startcode=1373994850276 2013-07-16 17:14:11,718 INFO [RpcServer.handler=1,port=50669] master.ServerManager(367): Registering server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,720 DEBUG [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1168): Config from master: hbase.rootdir=hdfs://localhost:56710/user/ec2-user/hbase 2013-07-16 17:14:11,720 DEBUG [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1168): Config from master: fs.default.name=hdfs://localhost:56710 2013-07-16 17:14:11,723 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2013-07-16 17:14:11,723 WARN [RS:0;ip-10-197-55-49:55133] hbase.ZNodeClearer(57): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2013-07-16 17:14:11,726 INFO [RS:0;ip-10-197-55-49:55133] fs.HFileSystem(244): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2013-07-16 17:14:11,727 DEBUG [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1420): logdir=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,727 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50669-0x13fe879789b0011 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,734 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1935): reportForDuty to master=ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211 with port=39939, startcode=1373994850314 2013-07-16 17:14:11,736 INFO [RpcServer.handler=3,port=50669] master.ServerManager(367): Registering server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:11,737 DEBUG [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1168): Config from master: hbase.rootdir=hdfs://localhost:56710/user/ec2-user/hbase 2013-07-16 17:14:11,737 DEBUG [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1168): Config from master: fs.default.name=hdfs://localhost:56710 2013-07-16 17:14:11,739 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2013-07-16 17:14:11,739 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(817): Finished waiting for region servers count to settle; checked in 2, slept for 1002 ms, expecting minimum of 2, maximum of 2, master is running. 
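The replicationSource,2 threads in the entries above are each region server's shipping thread for peer cluster "2": they first see 0 region servers on the peer and wait, then pick one once it appears and start replicating. Wiring up such a peer is normally a one-time client call. Below is a minimal sketch using the 0.95-era ReplicationAdmin API, where the peer id "2" comes from the log but the cluster key (quorum:port:znode-parent) is an assumed placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;

    public class AddPeerSketch {
      public static void main(String[] args) throws Exception {
        // Configuration pointing at the source (master) cluster; values are placeholders.
        Configuration conf = HBaseConfiguration.create();
        ReplicationAdmin repAdmin = new ReplicationAdmin(conf);
        // Cluster key format is "zkQuorum:zkClientPort:znodeParent"; localhost:62127:/2
        // mirrors the slave cluster seen in this log but is an assumption here.
        repAdmin.addPeer("2", "localhost:62127:/2");
        // Cleanup of the underlying connection is omitted for brevity.
      }
    }
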
2013-07-16 17:14:11,739 WARN [RS:1;ip-10-197-55-49:39939] hbase.ZNodeClearer(57): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2013-07-16 17:14:11,740 INFO [RS:0;ip-10-197-55-49:55133] zookeeper.RecoverableZooKeeper(462): Node /2/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 already exists and this is not a retry 2013-07-16 17:14:11,742 INFO [RS:1;ip-10-197-55-49:39939] fs.HFileSystem(244): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2013-07-16 17:14:11,743 DEBUG [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1420): logdir=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:11,745 DEBUG [RS:0;ip-10-197-55-49:55133] regionserver.Replication(122): ReplicationStatisticsThread 5 2013-07-16 17:14:11,745 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50669-0x13fe879789b0011 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:11,746 INFO [RS:0;ip-10-197-55-49:55133] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:11,748 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50669-0x13fe879789b0011 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,761 INFO [RS:0;ip-10-197-55-49:55133] wal.FSHLog(522): New WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276/ip-10-197-55-49.us-west-1.compute.internal%2C55133%2C1373994850276.1373994851753 2013-07-16 17:14:11,766 DEBUG [RS:0;ip-10-197-55-49:55133] executor.ExecutorService(99): Starting executor service name=RS_OPEN_REGION-ip-10-197-55-49:55133, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:11,767 DEBUG [RS:0;ip-10-197-55-49:55133] executor.ExecutorService(99): Starting executor service name=RS_OPEN_META-ip-10-197-55-49:55133, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:11,767 DEBUG [RS:0;ip-10-197-55-49:55133] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_REGION-ip-10-197-55-49:55133, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:11,767 DEBUG [RS:0;ip-10-197-55-49:55133] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_META-ip-10-197-55-49:55133, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:11,768 INFO [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(462): Node /2/replication/peers already exists and this is not a retry 2013-07-16 17:14:11,772 INFO [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(462): Node /2/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 already exists and this is not a retry 2013-07-16 17:14:11,774 DEBUG [RS:1;ip-10-197-55-49:39939] regionserver.Replication(122): ReplicationStatisticsThread 5 2013-07-16 17:14:11,775 INFO [RS:1;ip-10-197-55-49:39939] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:11,778 DEBUG [RS:0;ip-10-197-55-49:55133] zookeeper.ZKUtil(431): regionserver:55133-0x13fe879789b0012 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:11,780 DEBUG [RS:0;ip-10-197-55-49:55133] 
zookeeper.ZKUtil(431): regionserver:55133-0x13fe879789b0012 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,780 INFO [RS:0;ip-10-197-55-49:55133] regionserver.ReplicationSourceManager(184): Current list of replicators: [ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] other RSs: [ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] 2013-07-16 17:14:11,884 INFO [RS:1;ip-10-197-55-49:39939] wal.FSHLog(522): New WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 2013-07-16 17:14:11,887 DEBUG [RS:1;ip-10-197-55-49:39939] executor.ExecutorService(99): Starting executor service name=RS_OPEN_REGION-ip-10-197-55-49:39939, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:11,887 DEBUG [RS:1;ip-10-197-55-49:39939] executor.ExecutorService(99): Starting executor service name=RS_OPEN_META-ip-10-197-55-49:39939, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:11,887 DEBUG [RS:1;ip-10-197-55-49:39939] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_REGION-ip-10-197-55-49:39939, corePoolSize=3, maxPoolSize=3 2013-07-16 17:14:11,888 DEBUG [RS:1;ip-10-197-55-49:39939] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_META-ip-10-197-55-49:39939, corePoolSize=1, maxPoolSize=1 2013-07-16 17:14:11,892 WARN [RS:0;ip-10-197-55-49:55133] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:11,893 WARN [RS:0;ip-10-197-55-49:55133] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max 2013-07-16 17:14:11,894 WARN [RS:0;ip-10-197-55-49:55133] conf.Configuration(817): io.bytes.per.checksum is deprecated. 
Instead, use dfs.bytes-per-checksum 2013-07-16 17:14:11,895 DEBUG [RS:1;ip-10-197-55-49:39939] zookeeper.ZKUtil(431): regionserver:39939-0x13fe879789b0013 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:11,896 INFO [RS:0;ip-10-197-55-49:55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4ed95bc3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:11,896 DEBUG [RS:1;ip-10-197-55-49:39939] zookeeper.ZKUtil(431): regionserver:39939-0x13fe879789b0013 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,896 INFO [RS:1;ip-10-197-55-49:39939] regionserver.ReplicationSourceManager(184): Current list of replicators: [ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] other RSs: [ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] 2013-07-16 17:14:11,901 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4ed95bc3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:11,902 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4ed95bc3-0x13fe879789b0018 connected 2013-07-16 17:14:11,903 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:14:11,903 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,912 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:14:11,913 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,914 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(250): Replicating 9bb659c2-f860-4340-b5f5-0571795e3364 -> 2a81acba-2c55-4568-ac13-a15ee9cb847a 2013-07-16 17:14:11,917 INFO [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(250): Replicating 9bb659c2-f860-4340-b5f5-0571795e3364 -> 2a81acba-2c55-4568-ac13-a15ee9cb847a 2013-07-16 17:14:11,921 WARN [RS:1;ip-10-197-55-49:39939] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:11,922 WARN [RS:1;ip-10-197-55-49:39939] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max 2013-07-16 17:14:11,922 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:11,923 WARN [RS:1;ip-10-197-55-49:39939] conf.Configuration(817): io.bytes.per.checksum is deprecated. 
Instead, use dfs.bytes-per-checksum 2013-07-16 17:14:11,925 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:11,929 INFO [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4896b555 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:11,933 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4896b555 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:11,936 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4896b555-0x13fe879789b0019 connected 2013-07-16 17:14:11,950 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:11,950 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 1 2013-07-16 17:14:11,955 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:11,955 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 1 2013-07-16 17:14:11,963 WARN [RS:0;ip-10-197-55-49:55133] conf.Configuration(817): fs.default.name is deprecated. Instead, use fs.defaultFS 2013-07-16 17:14:11,963 WARN [RS:0;ip-10-197-55-49:55133] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max 2013-07-16 17:14:11,964 WARN [RS:0;ip-10-197-55-49:55133] conf.Configuration(817): io.bytes.per.checksum is deprecated. 
Instead, use dfs.bytes-per-checksum 2013-07-16 17:14:11,965 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1199): Serving as ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, RpcServer on ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133, sessionid=0x13fe879789b0012 2013-07-16 17:14:11,965 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(170): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 starting 2013-07-16 17:14:11,965 DEBUG [RS:0;ip-10-197-55-49:55133] snapshot.RegionServerSnapshotManager(140): Start Snapshot Manager ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:11,969 DEBUG [RS:0;ip-10-197-55-49:55133] procedure.ZKProcedureMemberRpcs(339): Starting procedure member 'null' 2013-07-16 17:14:11,969 DEBUG [RS:0;ip-10-197-55-49:55133] procedure.ZKProcedureMemberRpcs(138): Checking for aborted procedures on node: '/2/online-snapshot/abort' 2013-07-16 17:14:11,971 DEBUG [RS:0;ip-10-197-55-49:55133] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/2/online-snapshot/acquired' 2013-07-16 17:14:11,977 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7565a43b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:11,982 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7565a43b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:11,984 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7565a43b-0x13fe879789b001a connected 2013-07-16 17:14:11,992 WARN [RS:1;ip-10-197-55-49:39939] conf.Configuration(817): mapreduce.job.counters.limit is deprecated. 
Instead, use mapreduce.job.counters.max 2013-07-16 17:14:11,993 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1199): Serving as ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, RpcServer on ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939, sessionid=0x13fe879789b0013 2013-07-16 17:14:11,993 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314] regionserver.SplitLogWorker(170): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 starting 2013-07-16 17:14:11,993 DEBUG [RS:1;ip-10-197-55-49:39939] snapshot.RegionServerSnapshotManager(140): Start Snapshot Manager ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:11,994 DEBUG [RS:1;ip-10-197-55-49:39939] procedure.ZKProcedureMemberRpcs(339): Starting procedure member 'null' 2013-07-16 17:14:11,994 DEBUG [RS:1;ip-10-197-55-49:39939] procedure.ZKProcedureMemberRpcs(138): Checking for aborted procedures on node: '/2/online-snapshot/abort' 2013-07-16 17:14:11,996 DEBUG [RS:1;ip-10-197-55-49:39939] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/2/online-snapshot/acquired' 2013-07-16 17:14:12,001 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6fb1f800 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:12,004 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6fb1f800 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:12,006 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6fb1f800-0x13fe879789b001b connected 2013-07-16 17:14:12,053 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,057 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,058 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,058 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 2 2013-07-16 17:14:12,062 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last 
hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,062 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 2 2013-07-16 17:14:12,260 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,264 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,268 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,268 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 3 2013-07-16 17:14:12,269 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,269 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 3 2013-07-16 17:14:12,571 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,571 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,576 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,576 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 4 2013-07-16 17:14:12,576 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] 
fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,577 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 4 2013-07-16 17:14:12,800 INFO [M:0;ip-10-197-55-49:50669] zookeeper.MetaRegionTracker(164): Unsetting META region location in ZooKeeper 2013-07-16 17:14:12,803 WARN [M:0;ip-10-197-55-49:50669] zookeeper.RecoverableZooKeeper(163): Node /2/meta-region-server already deleted, retry=false 2013-07-16 17:14:12,803 DEBUG [M:0;ip-10-197-55-49:50669] master.AssignmentManager(2126): No previous transition plan was found (or we are ignoring an existing plan) for .META.,,1.1028785192 so generated a random one; hri=.META.,,1.1028785192, src=, dest=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276; 2 (online=2, available=2) available servers, forceNewPlan=false 2013-07-16 17:14:12,803 DEBUG [M:0;ip-10-197-55-49:50669] zookeeper.ZKAssign(208): master:50669-0x13fe879789b0011 Creating (or updating) unassigned node for 1028785192 with OFFLINE state 2013-07-16 17:14:12,808 DEBUG [M:0;ip-10-197-55-49:50669] master.AssignmentManager(1835): Setting table .META. to ENABLED state. 2013-07-16 17:14:12,814 INFO [M:0;ip-10-197-55-49:50669] master.AssignmentManager(1854): Assigning region .META.,,1.1028785192 to ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,814 INFO [M:0;ip-10-197-55-49:50669] master.RegionStates(265): Transitioned from {1028785192/.META. state=OFFLINE, ts=1373994852803, server=null} to {1028785192/.META. state=PENDING_OPEN, ts=1373994852814, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} 2013-07-16 17:14:12,814 DEBUG [M:0;ip-10-197-55-49:50669] master.ServerManager(735): New admin connection to ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,818 INFO [RpcServer.handler=1,port=55133] regionserver.HRegionServer(3455): Open .META.,,1.1028785192 2013-07-16 17:14:12,821 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] zookeeper.ZKAssign(786): regionserver:55133-0x13fe879789b0012 Attempting to transition node 1028785192/.META. 
from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:12,821 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(555): AssignmentManager hasn't finished failover cleanup 2013-07-16 17:14:12,828 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/region-in-transition/1028785192 2013-07-16 17:14:12,829 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] zookeeper.ZKAssign(862): regionserver:55133-0x13fe879789b0012 Successfully transitioned node 1028785192 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:12,829 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] regionserver.HRegionServer(1439): logdir=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,830 INFO [RS_OPEN_META-ip-10-197-55-49:55133-0] wal.FSHLog(350): WAL/HLog configuration: blocksize=64 MB, rollsize=19.66 KB, enabled=true, optionallogflushinternal=1000ms 2013-07-16 17:14:12,831 DEBUG [AM.ZK.Worker-pool-13-thread-1] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, region=1028785192/.META., current state from region state map ={1028785192/.META. state=PENDING_OPEN, ts=1373994852814, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} 2013-07-16 17:14:12,831 INFO [AM.ZK.Worker-pool-13-thread-1] master.RegionStates(265): Transitioned from {1028785192/.META. state=PENDING_OPEN, ts=1373994852814, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} to {1028785192/.META. state=OPENING, ts=1373994852831, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} 2013-07-16 17:14:12,840 INFO [RS_OPEN_META-ip-10-197-55-49:55133-0] wal.FSHLog(522): New WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276/ip-10-197-55-49.us-west-1.compute.internal%2C55133%2C1373994850276.1373994852834.meta 2013-07-16 17:14:12,841 INFO [RS_OPEN_META-ip-10-197-55-49:55133-0] regionserver.HRegion(4192): Open {ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:12,844 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] coprocessor.CoprocessorHost(180): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2013-07-16 17:14:12,844 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] regionserver.HRegion(5160): Registered coprocessor service: region=.META.,,1 service=MultiRowMutationService 2013-07-16 17:14:12,845 INFO [RS_OPEN_META-ip-10-197-55-49:55133-0] regionserver.RegionCoprocessorHost(197): Load coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of .META. successfully. 2013-07-16 17:14:12,845 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table .META. 
1028785192 2013-07-16 17:14:12,846 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] regionserver.HRegion(534): Instantiated .META.,,1.1028785192 2013-07-16 17:14:12,852 INFO [StoreOpener-1028785192/.META.-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:12,858 INFO [RS_OPEN_META-ip-10-197-55-49:55133-0] regionserver.HRegion(629): Onlined 1028785192/.META.; next sequenceid=1 2013-07-16 17:14:12,858 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] zookeeper.ZKAssign(598): regionserver:55133-0x13fe879789b0012 Attempting to retransition the opening state of node 1028785192/.META. 2013-07-16 17:14:12,860 INFO [PostOpenDeployTasks:1028785192] regionserver.HRegionServer(1703): Post open deploy tasks for region=.META.,,1.1028785192 2013-07-16 17:14:12,861 INFO [PostOpenDeployTasks:1028785192] zookeeper.MetaRegionTracker(123): Setting META region location in ZooKeeper as ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,863 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/meta-region-server 2013-07-16 17:14:12,864 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:39939-0x13fe879789b0013 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/meta-region-server 2013-07-16 17:14:12,864 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/2/meta-region-server 2013-07-16 17:14:12,868 INFO [PostOpenDeployTasks:1028785192] regionserver.HRegionServer(1728): Done with post open deploy task for region=.META.,,1.1028785192 2013-07-16 17:14:12,868 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] zookeeper.ZKAssign(786): regionserver:55133-0x13fe879789b0012 Attempting to transition node 1028785192/.META. from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:12,872 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/region-in-transition/1028785192 2013-07-16 17:14:12,872 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] zookeeper.ZKAssign(862): regionserver:55133-0x13fe879789b0012 Successfully transitioned node 1028785192 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:12,872 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''}, server: ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,873 DEBUG [RS_OPEN_META-ip-10-197-55-49:55133-0] handler.OpenRegionHandler(186): Opened .META.,,1.1028785192 on server:ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,874 DEBUG [AM.ZK.Worker-pool-13-thread-2] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, region=1028785192/.META., current state from region state map ={1028785192/.META. 
state=OPENING, ts=1373994852831, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} 2013-07-16 17:14:12,874 INFO [AM.ZK.Worker-pool-13-thread-2] master.RegionStates(265): Transitioned from {1028785192/.META. state=OPENING, ts=1373994852831, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} to {1028785192/.META. state=OPEN, ts=1373994852874, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} 2013-07-16 17:14:12,875 INFO [MASTER_OPEN_REGION-ip-10-197-55-49:50669-0] handler.OpenedRegionHandler(143): Handling OPENED event for 1028785192/.META. from ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276; deleting unassigned node 2013-07-16 17:14:12,876 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50669-0] zookeeper.ZKAssign(405): master:50669-0x13fe879789b0011 Deleting existing unassigned node for 1028785192 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:12,879 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/region-in-transition/1028785192 2013-07-16 17:14:12,880 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50669-0] zookeeper.ZKAssign(434): master:50669-0x13fe879789b0011 Successfully deleted unassigned node for region 1028785192 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:12,880 DEBUG [AM.ZK.Worker-pool-13-thread-3] master.AssignmentManager$4(1218): The znode of .META.,,1.1028785192 has been deleted, region state: {1028785192/.META. state=OPEN, ts=1373994852874, server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276} 2013-07-16 17:14:12,880 INFO [AM.ZK.Worker-pool-13-thread-3] master.RegionStates(301): Onlined 1028785192/.META. on ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,880 INFO [AM.ZK.Worker-pool-13-thread-3] master.AssignmentManager$4(1223): The master has opened .META.,,1.1028785192 that was online on ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,882 INFO [M:0;ip-10-197-55-49:50669] master.HMaster(973): .META. assigned=1, rit=false, location=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:12,891 DEBUG [M:0;ip-10-197-55-49:50669] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:12,891 INFO [M:0;ip-10-197-55-49:50669] catalog.MetaMigrationConvertingToPB(166): .META. doesn't have any entries to update. 2013-07-16 17:14:12,891 INFO [M:0;ip-10-197-55-49:50669] catalog.MetaMigrationConvertingToPB(132): META already up-to date with PB serialization 2013-07-16 17:14:12,906 DEBUG [M:0;ip-10-197-55-49:50669] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:12,908 INFO [M:0;ip-10-197-55-49:50669] master.AssignmentManager(463): Clean cluster startup. 
Assigning userregions 2013-07-16 17:14:12,908 DEBUG [M:0;ip-10-197-55-49:50669] zookeeper.ZKAssign(452): master:50669-0x13fe879789b0011 Deleting any existing unassigned nodes 2013-07-16 17:14:12,921 DEBUG [M:0;ip-10-197-55-49:50669] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:12,927 INFO [M:0;ip-10-197-55-49:50669] master.HMaster(872): Master has completed initialization 2013-07-16 17:14:12,934 DEBUG [CatalogJanitor-ip-10-197-55-49:50669] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:12,978 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,979 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,981 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5f262a85 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:12,985 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5f262a85 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:12,988 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,988 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5f262a85-0x13fe879789b001c connected 2013-07-16 17:14:12,988 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 5 2013-07-16 17:14:12,989 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:12,989 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 5 2013-07-16 17:14:12,997 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:12,997 INFO [pool-1-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b001c 2013-07-16 17:14:13,002 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1a46a171 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 
17:14:13,005 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1a46a171 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:13,006 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1a46a171-0x13fe879789b001d connected 2013-07-16 17:14:13,007 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(871): Minicluster is up 2013-07-16 17:14:13,050 DEBUG [RpcServer.handler=2,port=50904] lock.ZKInterProcessLockBase(226): Acquired a lock for /1/table-lock/test/write-master:509040000000000 2013-07-16 17:14:13,057 DEBUG [RpcServer.handler=2,port=50904] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:13,065 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904-0] handler.CreateTableHandler(146): Create table test 2013-07-16 17:14:13,077 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:13,083 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8415678436664134350_1015{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,084 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-8415678436664134350_1015{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,092 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,092 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,092 INFO [RegionOpenAndInitThread-test-3] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 
'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,093 INFO [RegionOpenAndInitThread-test-4] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,096 INFO [RegionOpenAndInitThread-test-6] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,097 INFO [RegionOpenAndInitThread-test-5] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,099 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,100 INFO [RegionOpenAndInitThread-test-10] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,101 INFO [RegionOpenAndInitThread-test-8] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,101 INFO [RegionOpenAndInitThread-test-9] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,157 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_4518885466176131333_1019{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,162 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_4518885466176131333_1019 size 31 2013-07-16 17:14:13,176 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
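The block of HRegion(4031) entries above is the 'test' table being created region by region from a descriptor with two column families: 'f' with REPLICATION_SCOPE => '1' (its edits are shipped to the peer) and 'norep' with REPLICATION_SCOPE => '0' (never replicated). Here is a rough sketch of an equivalent descriptor built with the 0.95-era client API; the split keys are illustrative and only echo the region start keys (bbb, ggg, kkk, lll, ...) visible in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor htd = new HTableDescriptor("test");

        HColumnDescriptor f = new HColumnDescriptor("f");
        f.setMaxVersions(3);
        f.setScope(1);        // REPLICATION_SCOPE => '1': replicated family
        htd.addFamily(f);

        HColumnDescriptor norep = new HColumnDescriptor("norep");
        norep.setMaxVersions(1);
        norep.setScope(0);    // REPLICATION_SCOPE => '0': local-only family
        htd.addFamily(norep);

        // Pre-split on a few of the start keys seen in the log (illustrative only).
        byte[][] splits = new byte[][] {
            Bytes.toBytes("bbb"), Bytes.toBytes("ggg"),
            Bytes.toBytes("kkk"), Bytes.toBytes("lll")
        };
        admin.createTable(htd, splits);
        admin.close();
      }
    }
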
2013-07-16 17:14:13,177 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.: disabling compactions & flushes 2013-07-16 17:14:13,177 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:13,177 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:13,178 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,182 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-2958697113929900556_1023{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,196 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:13,212 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-2958697113929900556_1023 size 34 2013-07-16 17:14:13,213 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(534): Instantiated test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:14:13,214 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(965): Closing test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061.: disabling compactions & flushes 2013-07-16 17:14:13,214 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(987): Updates disabled for region test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:14:13,214 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(1045): Closed test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 
2013-07-16 17:14:13,214 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,217 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6058103249307897219_1026{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 34 2013-07-16 17:14:13,217 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_4945416784324690208_1030{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,225 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5182229263298409134_1032{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 34 2013-07-16 17:14:13,226 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6058103249307897219_1026 size 34 2013-07-16 17:14:13,226 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7043415229329870334_1034{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 34 2013-07-16 17:14:13,226 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(534): Instantiated test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:13,227 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_1551510891901991164_1028{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 34 2013-07-16 17:14:13,227 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(965): Closing test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.: disabling compactions & flushes 2013-07-16 17:14:13,227 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_4945416784324690208_1030 size 34 2013-07-16 17:14:13,227 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(987): Updates disabled for region test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 
2013-07-16 17:14:13,227 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_581377697335098832_1033{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 34 2013-07-16 17:14:13,227 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(1045): Closed test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:13,228 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,229 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5182229263298409134_1032 size 34 2013-07-16 17:14:13,230 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7043415229329870334_1034 size 34 2013-07-16 17:14:13,230 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_1551510891901991164_1028 size 34 2013-07-16 17:14:13,230 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3256580557453441045_1029{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 34 2013-07-16 17:14:13,230 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_581377697335098832_1033 size 34 2013-07-16 17:14:13,232 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8471173886913480505_1035{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 34 2013-07-16 17:14:13,232 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3256580557453441045_1029 size 34 2013-07-16 17:14:13,234 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-8471173886913480505_1035 size 34 2013-07-16 17:14:13,240 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7341445739035316079_1037{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], 
ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,241 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7341445739035316079_1037{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,245 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:14:13,245 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.: disabling compactions & flushes 2013-07-16 17:14:13,245 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:14:13,245 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:14:13,246 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,254 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5490458536644996703_1039{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,255 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5490458536644996703_1039{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,259 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(534): Instantiated test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:13,259 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(965): Closing test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.: disabling compactions & flushes 2013-07-16 17:14:13,259 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(987): Updates disabled for region test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:13,259 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(1045): Closed test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 
2013-07-16 17:14:13,260 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,264 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-1055222535440271468_1041{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,265 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-1055222535440271468_1041{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,269 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(534): Instantiated test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:14:13,269 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(965): Closing test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae.: disabling compactions & flushes 2013-07-16 17:14:13,269 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(987): Updates disabled for region test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:14:13,269 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(1045): Closed test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 
2013-07-16 17:14:13,270 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,274 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_5910854180791987005_1043{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,275 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_5910854180791987005_1043{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,279 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:13,280 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,nnn,1373994853026.093d3ef494905701450f33a487333200.: disabling compactions & flushes 2013-07-16 17:14:13,280 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:13,280 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
2013-07-16 17:14:13,281 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,286 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-8303954706503345317_1045{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,289 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8303954706503345317_1045{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,292 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(534): Instantiated test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:14:13,293 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(965): Closing test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0.: disabling compactions & flushes 2013-07-16 17:14:13,293 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(987): Updates disabled for region test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:14:13,293 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(1045): Closed test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 
2013-07-16 17:14:13,293 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7818290419098099986_1047{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,294 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,294 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7818290419098099986_1047{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,298 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(534): Instantiated test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:13,300 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(965): Closing test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.: disabling compactions & flushes 2013-07-16 17:14:13,300 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(987): Updates disabled for region test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:13,300 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(1045): Closed test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 
2013-07-16 17:14:13,301 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,308 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_8792006010782049725_1049{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,310 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_8792006010782049725_1049{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 at position: N/A 2013-07-16 17:14:13,313 INFO [ip-10-197-55-49:49955Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 at position: N/A 2013-07-16 17:14:13,318 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_147465357578010944_1051{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,320 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_147465357578010944_1051{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,321 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 
2013-07-16 17:14:13,321 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91.: disabling compactions & flushes 2013-07-16 17:14:13,321 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:14:13,321 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:14:13,322 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-1886164081042592559_1053{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,323 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,324 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(534): Instantiated test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:13,324 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-1886164081042592559_1053 size 34 2013-07-16 17:14:13,324 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(965): Closing test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.: disabling compactions & flushes 2013-07-16 17:14:13,325 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(987): Updates disabled for region test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:13,325 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(1045): Closed test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:13,325 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,326 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(534): Instantiated test,sss,1373994853027.287928895932801d51170fb202253eac. 
2013-07-16 17:14:13,326 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(965): Closing test,sss,1373994853027.287928895932801d51170fb202253eac.: disabling compactions & flushes 2013-07-16 17:14:13,326 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(987): Updates disabled for region test,sss,1373994853027.287928895932801d51170fb202253eac. 2013-07-16 17:14:13,326 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(1045): Closed test,sss,1373994853027.287928895932801d51170fb202253eac. 2013-07-16 17:14:13,327 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,346 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-2923826087956154705_1058{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 34 2013-07-16 17:14:13,347 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-2923826087956154705_1058 size 34 2013-07-16 17:14:13,350 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-2944842866409542472_1056{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,360 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-2944842866409542472_1056{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,367 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_1141808563778368252_1059{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,369 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_1141808563778368252_1059{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,369 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 
2013-07-16 17:14:13,369 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.: disabling compactions & flushes 2013-07-16 17:14:13,369 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:13,370 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:13,370 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,371 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(534): Instantiated test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:14:13,372 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(965): Closing test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b.: disabling compactions & flushes 2013-07-16 17:14:13,372 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(987): Updates disabled for region test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:14:13,372 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(1045): Closed test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 
2013-07-16 17:14:13,373 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,387 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-7870271830380147170_1062{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,389 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-7870271830380147170_1062 size 34 2013-07-16 17:14:13,394 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_2779527507850532652_1063{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,395 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_2779527507850532652_1063{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,396 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:13,396 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.: disabling compactions & flushes 2013-07-16 17:14:13,396 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:13,396 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 
2013-07-16 17:14:13,397 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,398 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(534): Instantiated test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:14:13,398 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(965): Closing test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9.: disabling compactions & flushes 2013-07-16 17:14:13,398 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(987): Updates disabled for region test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:14:13,399 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(1045): Closed test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:14:13,399 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:43175/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:13,409 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:13,415 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6567602713029641072_1066{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:13,417 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6567602713029641072_1066 size 34 2013-07-16 17:14:13,421 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-3673241728438017961_1067{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:13,422 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 
2013-07-16 17:14:13,423 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.: disabling compactions & flushes 2013-07-16 17:14:13,423 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:13,423 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:13,423 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-3673241728438017961_1067 size 31 2013-07-16 17:14:13,461 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(534): Instantiated test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:14:13,461 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(965): Closing test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.: disabling compactions & flushes 2013-07-16 17:14:13,462 DEBUG [RegionOpenAndInitThread-test-7] regionserver.HRegion(987): Updates disabled for region test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:14:13,462 INFO [RegionOpenAndInitThread-test-7] regionserver.HRegion(1045): Closed test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:14:13,490 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:13,491 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:13,494 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:13,495 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 6 2013-07-16 17:14:13,496 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:13,496 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 6 2013-07-16 17:14:13,619 DEBUG [RegionOpenAndInitThread-test-10] regionserver.HRegion(534): Instantiated test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 
2013-07-16 17:14:13,619 DEBUG [RegionOpenAndInitThread-test-10] regionserver.HRegion(965): Closing test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.: disabling compactions & flushes 2013-07-16 17:14:13,619 DEBUG [RegionOpenAndInitThread-test-10] regionserver.HRegion(987): Updates disabled for region test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:13,620 INFO [RegionOpenAndInitThread-test-10] regionserver.HRegion(1045): Closed test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:13,623 DEBUG [RegionOpenAndInitThread-test-4] regionserver.HRegion(534): Instantiated test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:14:13,623 DEBUG [RegionOpenAndInitThread-test-4] regionserver.HRegion(965): Closing test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70.: disabling compactions & flushes 2013-07-16 17:14:13,624 DEBUG [RegionOpenAndInitThread-test-4] regionserver.HRegion(987): Updates disabled for region test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:14:13,624 INFO [RegionOpenAndInitThread-test-4] regionserver.HRegion(1045): Closed test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:14:13,625 DEBUG [RegionOpenAndInitThread-test-3] regionserver.HRegion(534): Instantiated test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:13,625 DEBUG [RegionOpenAndInitThread-test-3] regionserver.HRegion(965): Closing test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.: disabling compactions & flushes 2013-07-16 17:14:13,625 DEBUG [RegionOpenAndInitThread-test-3] regionserver.HRegion(987): Updates disabled for region test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:13,625 INFO [RegionOpenAndInitThread-test-3] regionserver.HRegion(1045): Closed test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:13,627 DEBUG [RegionOpenAndInitThread-test-6] regionserver.HRegion(534): Instantiated test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:14:13,627 DEBUG [RegionOpenAndInitThread-test-6] regionserver.HRegion(965): Closing test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa.: disabling compactions & flushes 2013-07-16 17:14:13,627 DEBUG [RegionOpenAndInitThread-test-6] regionserver.HRegion(987): Updates disabled for region test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:14:13,628 INFO [RegionOpenAndInitThread-test-6] regionserver.HRegion(1045): Closed test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:14:13,632 DEBUG [RegionOpenAndInitThread-test-8] regionserver.HRegion(534): Instantiated test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:13,632 DEBUG [RegionOpenAndInitThread-test-8] regionserver.HRegion(965): Closing test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.: disabling compactions & flushes 2013-07-16 17:14:13,632 DEBUG [RegionOpenAndInitThread-test-8] regionserver.HRegion(987): Updates disabled for region test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:13,632 INFO [RegionOpenAndInitThread-test-8] regionserver.HRegion(1045): Closed test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:13,633 DEBUG [RegionOpenAndInitThread-test-5] regionserver.HRegion(534): Instantiated test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 
2013-07-16 17:14:13,633 DEBUG [RegionOpenAndInitThread-test-5] regionserver.HRegion(965): Closing test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.: disabling compactions & flushes 2013-07-16 17:14:13,634 DEBUG [RegionOpenAndInitThread-test-5] regionserver.HRegion(987): Updates disabled for region test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 2013-07-16 17:14:13,634 INFO [RegionOpenAndInitThread-test-5] regionserver.HRegion(1045): Closed test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 2013-07-16 17:14:13,669 DEBUG [RegionOpenAndInitThread-test-9] regionserver.HRegion(534): Instantiated test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:13,669 DEBUG [RegionOpenAndInitThread-test-9] regionserver.HRegion(965): Closing test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.: disabling compactions & flushes 2013-07-16 17:14:13,670 DEBUG [RegionOpenAndInitThread-test-9] regionserver.HRegion(987): Updates disabled for region test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:13,670 INFO [RegionOpenAndInitThread-test-9] regionserver.HRegion(1045): Closed test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:13,716 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:13,751 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(534): Instantiated test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:14:13,751 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(965): Closing test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634.: disabling compactions & flushes 2013-07-16 17:14:13,751 DEBUG [RegionOpenAndInitThread-test-2] regionserver.HRegion(987): Updates disabled for region test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:14:13,751 INFO [RegionOpenAndInitThread-test-2] regionserver.HRegion(1045): Closed test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 
2013-07-16 17:14:13,835 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904-0] catalog.MetaEditor(254): Added 26 regions in META 2013-07-16 17:14:13,843 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904-0] master.AssignmentManager(2458): Bulk assigning 26 region(s) across 2 server(s), round-robin=true 2013-07-16 17:14:13,846 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.AssignmentManager(1503): Assigning 13 region(s) to ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,846 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.AssignmentManager(1503): Assigning 13 region(s) to ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,846 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904-0] master.GeneralBulkAssigner(177): Timeout-on-RIT=133000 2013-07-16 17:14:13,851 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 64c33257daeacd0fe5bf6a175319eadb with OFFLINE state 2013-07-16 17:14:13,851 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 8ad63e6b6a48baaedae6985e87d53061 with OFFLINE state 2013-07-16 17:14:13,851 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for d3ed59de1135ee985829ee3cbad0cee2 with OFFLINE state 2013-07-16 17:14:13,851 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 23b3aa990a7ac4e12882f9d3eca30eea with OFFLINE state 2013-07-16 17:14:13,851 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for d29efc5b487c6ba1411a330e6ea9abfc with OFFLINE state 2013-07-16 17:14:13,852 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 093d3ef494905701450f33a487333200 with OFFLINE state 2013-07-16 17:14:13,852 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 8316cb643e8db1f47659c2704a5d85bd with OFFLINE state 2013-07-16 17:14:13,852 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 072118ef6c0d2e55b3a9ef36a82f9fae with OFFLINE state 2013-07-16 17:14:13,853 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for c4611b71a935e3b170cd961ded7d0820 with OFFLINE state 2013-07-16 17:14:13,856 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:13,857 DEBUG 
[ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for c7ae28d709ff479c3e4baad82cd99ca0 with OFFLINE state 2013-07-16 17:14:13,857 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 930e643b6dd6efc74f14deb95249db91 with OFFLINE state 2013-07-16 17:14:13,857 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 287928895932801d51170fb202253eac with OFFLINE state 2013-07-16 17:14:13,858 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 4ac8676e6af9c1c25f2f2a90ed99d3ae with OFFLINE state 2013-07-16 17:14:13,859 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 253df35786418e184ed944fb4881aa4b with OFFLINE state 2013-07-16 17:14:13,859 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={64c33257daeacd0fe5bf6a175319eadb state=OFFLINE, ts=1373994853836, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,859 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for baee7b76d51e7196ee3121edc50bda59 with OFFLINE state 2013-07-16 17:14:13,859 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={d3ed59de1135ee985829ee3cbad0cee2 state=OFFLINE, ts=1373994853836, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,860 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 6ca2c5a98917cab87c982b4bbb7e0115 with OFFLINE state 2013-07-16 17:14:13,860 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 7dde26b51ab247338eaa8d5e372498e9 with OFFLINE state 2013-07-16 17:14:13,860 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={8ad63e6b6a48baaedae6985e87d53061 state=OFFLINE, ts=1373994853836, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,861 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={23b3aa990a7ac4e12882f9d3eca30eea state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,861 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 38600084dc094d719e5c6033fca5452b with OFFLINE state 2013-07-16 17:14:13,861 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for d88c6958af6ef781dd9834d0369f4f70 with OFFLINE state 
2013-07-16 17:14:13,861 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for ba6e592748955d732d7843b9603163dc with OFFLINE state 2013-07-16 17:14:13,862 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for f4cfa4d251af617b31eb11c76cc68678 with OFFLINE state 2013-07-16 17:14:13,862 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 7050f74c0058e5a7a912d72a5fd1f4fa with OFFLINE state 2013-07-16 17:14:13,862 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={093d3ef494905701450f33a487333200 state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,862 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for b9cbc55dd9bcb588274e2598633563b2 with OFFLINE state 2013-07-16 17:14:13,862 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={d29efc5b487c6ba1411a330e6ea9abfc state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,862 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for f8146b196ac3399ee0b4bd5a227bd634 with OFFLINE state 2013-07-16 17:14:13,863 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 2fd443c241020be67cc0d08d473f5134 with OFFLINE state 2013-07-16 17:14:13,863 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 55d7e62280245f719c8f2cc61c586c64 with OFFLINE state 2013-07-16 17:14:13,863 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={8316cb643e8db1f47659c2704a5d85bd state=OFFLINE, ts=1373994853838, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,863 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={072118ef6c0d2e55b3a9ef36a82f9fae state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,864 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={c7ae28d709ff479c3e4baad82cd99ca0 state=OFFLINE, ts=1373994853838, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,864 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={930e643b6dd6efc74f14deb95249db91 state=OFFLINE, ts=1373994853838, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,864 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={c4611b71a935e3b170cd961ded7d0820 state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,865 DEBUG 
[pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={287928895932801d51170fb202253eac state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,865 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={4ac8676e6af9c1c25f2f2a90ed99d3ae state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,867 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={64c33257daeacd0fe5bf6a175319eadb state=OFFLINE, ts=1373994853836, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,867 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={baee7b76d51e7196ee3121edc50bda59 state=OFFLINE, ts=1373994853840, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,868 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={d3ed59de1135ee985829ee3cbad0cee2 state=OFFLINE, ts=1373994853836, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,869 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 unassigned znodes=2 of total=13 2013-07-16 17:14:13,869 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={253df35786418e184ed944fb4881aa4b state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,871 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={8ad63e6b6a48baaedae6985e87d53061 state=OFFLINE, ts=1373994853836, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,871 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:13,872 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={7dde26b51ab247338eaa8d5e372498e9 state=OFFLINE, ts=1373994853841, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,873 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 unassigned znodes=1 of total=13 2013-07-16 17:14:13,873 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={38600084dc094d719e5c6033fca5452b state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,874 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={6ca2c5a98917cab87c982b4bbb7e0115 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,875 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={23b3aa990a7ac4e12882f9d3eca30eea state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,875 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={ba6e592748955d732d7843b9603163dc state=OFFLINE, 
ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,876 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={d88c6958af6ef781dd9834d0369f4f70 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,877 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={7050f74c0058e5a7a912d72a5fd1f4fa state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,877 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={093d3ef494905701450f33a487333200 state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,878 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={b9cbc55dd9bcb588274e2598633563b2 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,879 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={f8146b196ac3399ee0b4bd5a227bd634 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,879 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 unassigned znodes=4 of total=13 2013-07-16 17:14:13,880 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={f4cfa4d251af617b31eb11c76cc68678 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,881 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={2fd443c241020be67cc0d08d473f5134 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,881 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={d29efc5b487c6ba1411a330e6ea9abfc state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,882 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={8316cb643e8db1f47659c2704a5d85bd state=OFFLINE, ts=1373994853838, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,883 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={55d7e62280245f719c8f2cc61c586c64 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,883 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={072118ef6c0d2e55b3a9ef36a82f9fae state=OFFLINE, ts=1373994853837, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,883 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 unassigned znodes=2 of total=13 2013-07-16 17:14:13,884 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={c7ae28d709ff479c3e4baad82cd99ca0 state=OFFLINE, ts=1373994853838, server=null}, 
server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,884 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={930e643b6dd6efc74f14deb95249db91 state=OFFLINE, ts=1373994853838, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,885 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={c4611b71a935e3b170cd961ded7d0820 state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,885 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 unassigned znodes=5 of total=13 2013-07-16 17:14:13,885 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={287928895932801d51170fb202253eac state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,886 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={4ac8676e6af9c1c25f2f2a90ed99d3ae state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,886 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={baee7b76d51e7196ee3121edc50bda59 state=OFFLINE, ts=1373994853840, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,887 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={253df35786418e184ed944fb4881aa4b state=OFFLINE, ts=1373994853839, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,888 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={7dde26b51ab247338eaa8d5e372498e9 state=OFFLINE, ts=1373994853841, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,888 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={38600084dc094d719e5c6033fca5452b state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,888 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 unassigned znodes=9 of total=13 2013-07-16 17:14:13,889 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={6ca2c5a98917cab87c982b4bbb7e0115 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,889 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={ba6e592748955d732d7843b9603163dc state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,890 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={d88c6958af6ef781dd9834d0369f4f70 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,890 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.AssignmentManager(1539): 
ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 unassigned znodes=10 of total=13 2013-07-16 17:14:13,890 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={7050f74c0058e5a7a912d72a5fd1f4fa state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,891 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={b9cbc55dd9bcb588274e2598633563b2 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,891 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={f8146b196ac3399ee0b4bd5a227bd634 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:13,891 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={f4cfa4d251af617b31eb11c76cc68678 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,892 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={2fd443c241020be67cc0d08d473f5134 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,892 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={55d7e62280245f719c8f2cc61c586c64 state=OFFLINE, ts=1373994853843, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,894 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 unassigned znodes=13 of total=13 2013-07-16 17:14:13,894 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {8ad63e6b6a48baaedae6985e87d53061 state=OFFLINE, ts=1373994853851, server=null} to {8ad63e6b6a48baaedae6985e87d53061 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,894 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {d29efc5b487c6ba1411a330e6ea9abfc state=OFFLINE, ts=1373994853851, server=null} to {d29efc5b487c6ba1411a330e6ea9abfc state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,894 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {072118ef6c0d2e55b3a9ef36a82f9fae state=OFFLINE, ts=1373994853852, server=null} to {072118ef6c0d2e55b3a9ef36a82f9fae state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,894 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {c7ae28d709ff479c3e4baad82cd99ca0 state=OFFLINE, ts=1373994853857, server=null} to {c7ae28d709ff479c3e4baad82cd99ca0 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,894 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] 
master.RegionStates(265): Transitioned from {930e643b6dd6efc74f14deb95249db91 state=OFFLINE, ts=1373994853857, server=null} to {930e643b6dd6efc74f14deb95249db91 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {287928895932801d51170fb202253eac state=OFFLINE, ts=1373994853857, server=null} to {287928895932801d51170fb202253eac state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {253df35786418e184ed944fb4881aa4b state=OFFLINE, ts=1373994853859, server=null} to {253df35786418e184ed944fb4881aa4b state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {7dde26b51ab247338eaa8d5e372498e9 state=OFFLINE, ts=1373994853860, server=null} to {7dde26b51ab247338eaa8d5e372498e9 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {38600084dc094d719e5c6033fca5452b state=OFFLINE, ts=1373994853861, server=null} to {38600084dc094d719e5c6033fca5452b state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {d88c6958af6ef781dd9834d0369f4f70 state=OFFLINE, ts=1373994853861, server=null} to {d88c6958af6ef781dd9834d0369f4f70 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {7050f74c0058e5a7a912d72a5fd1f4fa state=OFFLINE, ts=1373994853862, server=null} to {7050f74c0058e5a7a912d72a5fd1f4fa state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 unassigned znodes=13 of total=13 2013-07-16 17:14:13,895 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {b9cbc55dd9bcb588274e2598633563b2 state=OFFLINE, ts=1373994853862, server=null} to {b9cbc55dd9bcb588274e2598633563b2 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,896 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.RegionStates(265): Transitioned from {f8146b196ac3399ee0b4bd5a227bd634 state=OFFLINE, ts=1373994853862, server=null} to {f8146b196ac3399ee0b4bd5a227bd634 state=PENDING_OPEN, ts=1373994853896, 
server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,896 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {64c33257daeacd0fe5bf6a175319eadb state=OFFLINE, ts=1373994853851, server=null} to {64c33257daeacd0fe5bf6a175319eadb state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,896 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {d3ed59de1135ee985829ee3cbad0cee2 state=OFFLINE, ts=1373994853851, server=null} to {d3ed59de1135ee985829ee3cbad0cee2 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,896 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {23b3aa990a7ac4e12882f9d3eca30eea state=OFFLINE, ts=1373994853851, server=null} to {23b3aa990a7ac4e12882f9d3eca30eea state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,896 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {093d3ef494905701450f33a487333200 state=OFFLINE, ts=1373994853852, server=null} to {093d3ef494905701450f33a487333200 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,896 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {8316cb643e8db1f47659c2704a5d85bd state=OFFLINE, ts=1373994853852, server=null} to {8316cb643e8db1f47659c2704a5d85bd state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,897 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {c4611b71a935e3b170cd961ded7d0820 state=OFFLINE, ts=1373994853853, server=null} to {c4611b71a935e3b170cd961ded7d0820 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,897 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OFFLINE, ts=1373994853858, server=null} to {4ac8676e6af9c1c25f2f2a90ed99d3ae state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,897 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {baee7b76d51e7196ee3121edc50bda59 state=OFFLINE, ts=1373994853859, server=null} to {baee7b76d51e7196ee3121edc50bda59 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,897 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {6ca2c5a98917cab87c982b4bbb7e0115 state=OFFLINE, ts=1373994853860, server=null} to {6ca2c5a98917cab87c982b4bbb7e0115 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 
2013-07-16 17:14:13,897 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {ba6e592748955d732d7843b9603163dc state=OFFLINE, ts=1373994853861, server=null} to {ba6e592748955d732d7843b9603163dc state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,897 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {f4cfa4d251af617b31eb11c76cc68678 state=OFFLINE, ts=1373994853862, server=null} to {f4cfa4d251af617b31eb11c76cc68678 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,897 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {2fd443c241020be67cc0d08d473f5134 state=OFFLINE, ts=1373994853863, server=null} to {2fd443c241020be67cc0d08d473f5134 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,897 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:14:13,898 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.RegionStates(265): Transitioned from {55d7e62280245f719c8f2cc61c586c64 state=OFFLINE, ts=1373994853863, server=null} to {55d7e62280245f719c8f2cc61c586c64 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,898 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.ServerManager(735): New admin connection to ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:13,901 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:13,911 ERROR [IPC Server handler 0 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:13,915 WARN [RpcServer.handler=1,port=49041] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794)
    at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorModtime(FSTableDescriptors.java:429)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorModtime(FSTableDescriptors.java:414)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:169)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:132)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:3458)
    at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14390)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
    at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
    at org.apache.hadoop.ipc.Client.call(Client.java:1235)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199)
    at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254)
    at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167)
    at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
    ... 14 more
2013-07-16 17:14:13,918 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.
2013-07-16 17:14:13,920 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 64c33257daeacd0fe5bf6a175319eadb from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2013-07-16 17:14:13,921 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.
2013-07-16 17:14:13,922 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.
2013-07-16 17:14:13,924 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 8ad63e6b6a48baaedae6985e87d53061 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,924 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node d3ed59de1135ee985829ee3cbad0cee2 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,925 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:14:13,928 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node d29efc5b487c6ba1411a330e6ea9abfc from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,930 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:13,933 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:14:13,935 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 23b3aa990a7ac4e12882f9d3eca30eea from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,935 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 072118ef6c0d2e55b3a9ef36a82f9fae from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,935 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 64c33257daeacd0fe5bf6a175319eadb from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,935 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:13,936 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(4192): Open {ENCODED => 64c33257daeacd0fe5bf6a175319eadb, NAME => 'test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.', STARTKEY => '', ENDKEY => 'bbb'} 2013-07-16 17:14:13,936 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:13,937 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/8ad63e6b6a48baaedae6985e87d53061 2013-07-16 17:14:13,937 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:13,937 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(534): Instantiated test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
2013-07-16 17:14:13,941 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:13,941 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node d3ed59de1135ee985829ee3cbad0cee2 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,941 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(4192): Open {ENCODED => d3ed59de1135ee985829ee3cbad0cee2, NAME => 'test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.', STARTKEY => 'ggg', ENDKEY => 'hhh'} 2013-07-16 17:14:13,942 DEBUG [AM.ZK.Worker-pool-2-thread-6] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=64c33257daeacd0fe5bf6a175319eadb, current state from region state map ={64c33257daeacd0fe5bf6a175319eadb state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,942 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:13,942 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(534): Instantiated test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:13,947 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 8ad63e6b6a48baaedae6985e87d53061 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,947 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => 8ad63e6b6a48baaedae6985e87d53061, NAME => 'test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061.', STARTKEY => 'bbb', ENDKEY => 'ccc'} 2013-07-16 17:14:13,948 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 8ad63e6b6a48baaedae6985e87d53061 2013-07-16 17:14:13,948 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:14:13,951 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:13,957 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 
2013-07-16 17:14:13,957 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node d29efc5b487c6ba1411a330e6ea9abfc from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,957 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => d29efc5b487c6ba1411a330e6ea9abfc, NAME => 'test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.', STARTKEY => 'kkk', ENDKEY => 'lll'} 2013-07-16 17:14:13,958 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test d29efc5b487c6ba1411a330e6ea9abfc 2013-07-16 17:14:13,958 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:14:13,959 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,sss,1373994853027.287928895932801d51170fb202253eac. 2013-07-16 17:14:13,959 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d29efc5b487c6ba1411a330e6ea9abfc 2013-07-16 17:14:13,960 DEBUG [AM.ZK.Worker-pool-2-thread-8] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=d3ed59de1135ee985829ee3cbad0cee2, current state from region state map ={d3ed59de1135ee985829ee3cbad0cee2 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,960 DEBUG [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=8ad63e6b6a48baaedae6985e87d53061, current state from region state map ={8ad63e6b6a48baaedae6985e87d53061 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,962 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:13,964 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 23b3aa990a7ac4e12882f9d3eca30eea from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,964 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(4192): Open {ENCODED => 23b3aa990a7ac4e12882f9d3eca30eea, NAME => 'test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.', STARTKEY => 'lll', ENDKEY => 'mmm'} 2013-07-16 17:14:13,965 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:13,966 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(534): Instantiated test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 
2013-07-16 17:14:13,968 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/072118ef6c0d2e55b3a9ef36a82f9fae 2013-07-16 17:14:13,968 DEBUG [AM.ZK.Worker-pool-2-thread-9] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=d29efc5b487c6ba1411a330e6ea9abfc, current state from region state map ={d29efc5b487c6ba1411a330e6ea9abfc state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,969 DEBUG [AM.ZK.Worker-pool-2-thread-10] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=23b3aa990a7ac4e12882f9d3eca30eea, current state from region state map ={23b3aa990a7ac4e12882f9d3eca30eea state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:13,970 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 072118ef6c0d2e55b3a9ef36a82f9fae from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:13,970 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => 072118ef6c0d2e55b3a9ef36a82f9fae, NAME => 'test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae.', STARTKEY => 'mmm', ENDKEY => 'nnn'} 2013-07-16 17:14:13,971 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 072118ef6c0d2e55b3a9ef36a82f9fae 2013-07-16 17:14:13,972 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:14:13,972 INFO [StoreOpener-d3ed59de1135ee985829ee3cbad0cee2-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:13,973 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:14:13,975 DEBUG [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=072118ef6c0d2e55b3a9ef36a82f9fae, current state from region state map ={072118ef6c0d2e55b3a9ef36a82f9fae state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:13,975 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:13,976 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 
2013-07-16 17:14:13,978 INFO [StoreOpener-64c33257daeacd0fe5bf6a175319eadb-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:13,978 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:14:13,981 INFO [StoreOpener-8ad63e6b6a48baaedae6985e87d53061-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:13,983 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:14:13,983 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:13,988 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:13,988 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:14:13,990 INFO [StoreOpener-64c33257daeacd0fe5bf6a175319eadb-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:13,990 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:13,990 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 2013-07-16 17:14:13,993 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:13,994 INFO [RpcServer.handler=1,port=49041] regionserver.HRegionServer(3455): Open test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:14:13,994 INFO [StoreOpener-d29efc5b487c6ba1411a330e6ea9abfc-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:13,998 INFO [StoreOpener-8ad63e6b6a48baaedae6985e87d53061-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:13,998 INFO [StoreOpener-d3ed59de1135ee985829ee3cbad0cee2-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,001 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 
2013-07-16 17:14:14,002 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-0] master.AssignmentManager(1661): Bulk assigning done for ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,002 INFO [AM.ZK.Worker-pool-2-thread-11] master.RegionStates(265): Transitioned from {072118ef6c0d2e55b3a9ef36a82f9fae state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {072118ef6c0d2e55b3a9ef36a82f9fae state=OPENING, ts=1373994854002, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,003 INFO [AM.ZK.Worker-pool-2-thread-9] master.RegionStates(265): Transitioned from {d29efc5b487c6ba1411a330e6ea9abfc state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {d29efc5b487c6ba1411a330e6ea9abfc state=OPENING, ts=1373994854002, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,003 INFO [AM.ZK.Worker-pool-2-thread-7] master.RegionStates(265): Transitioned from {8ad63e6b6a48baaedae6985e87d53061 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {8ad63e6b6a48baaedae6985e87d53061 state=OPENING, ts=1373994854003, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,004 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(629): Onlined 64c33257daeacd0fe5bf6a175319eadb; next sequenceid=1 2013-07-16 17:14:14,005 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:14,008 INFO [RpcServer.handler=0,port=49955] regionserver.HRegionServer(3455): Open test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
2013-07-16 17:14:14,008 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined 8ad63e6b6a48baaedae6985e87d53061; next sequenceid=1 2013-07-16 17:14:14,008 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 8ad63e6b6a48baaedae6985e87d53061 2013-07-16 17:14:14,013 INFO [StoreOpener-072118ef6c0d2e55b3a9ef36a82f9fae-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,014 INFO [StoreOpener-23b3aa990a7ac4e12882f9d3eca30eea-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,016 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(629): Onlined d3ed59de1135ee985829ee3cbad0cee2; next sequenceid=1 2013-07-16 17:14:14,016 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:14,022 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-GeneralBulkAssigner-1] master.AssignmentManager(1661): Bulk assigning done for ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,022 INFO [AM.ZK.Worker-pool-2-thread-8] master.RegionStates(265): Transitioned from {d3ed59de1135ee985829ee3cbad0cee2 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {d3ed59de1135ee985829ee3cbad0cee2 state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,022 INFO [AM.ZK.Worker-pool-2-thread-6] master.RegionStates(265): Transitioned from {64c33257daeacd0fe5bf6a175319eadb state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {64c33257daeacd0fe5bf6a175319eadb state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,022 INFO [AM.ZK.Worker-pool-2-thread-10] master.RegionStates(265): Transitioned from {23b3aa990a7ac4e12882f9d3eca30eea state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {23b3aa990a7ac4e12882f9d3eca30eea state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,022 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904-0] master.GeneralBulkAssigner(153): bulk assigning total 26 regions to 2 servers, took 173ms, with 26 regions still in transition 2013-07-16 17:14:14,023 INFO [StoreOpener-d29efc5b487c6ba1411a330e6ea9abfc-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,023 INFO [PostOpenDeployTasks:d3ed59de1135ee985829ee3cbad0cee2] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 
2013-07-16 17:14:14,023 INFO [PostOpenDeployTasks:64c33257daeacd0fe5bf6a175319eadb] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:14,023 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904-0] master.AssignmentManager(2465): Bulk assigning done 2013-07-16 17:14:14,023 INFO [PostOpenDeployTasks:8ad63e6b6a48baaedae6985e87d53061] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:14:14,033 INFO [StoreOpener-072118ef6c0d2e55b3a9ef36a82f9fae-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,039 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(629): Onlined d29efc5b487c6ba1411a330e6ea9abfc; next sequenceid=1 2013-07-16 17:14:14,039 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node d29efc5b487c6ba1411a330e6ea9abfc 2013-07-16 17:14:14,039 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50904-0] lock.ZKInterProcessLockBase(328): Released /1/table-lock/test/write-master:509040000000000 2013-07-16 17:14:14,053 INFO [PostOpenDeployTasks:d29efc5b487c6ba1411a330e6ea9abfc] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:14:14,068 INFO [StoreOpener-23b3aa990a7ac4e12882f9d3eca30eea-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,073 INFO [PostOpenDeployTasks:64c33257daeacd0fe5bf6a175319eadb] catalog.MetaEditor(432): Updated row test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,074 INFO [PostOpenDeployTasks:64c33257daeacd0fe5bf6a175319eadb] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:14,074 INFO [PostOpenDeployTasks:8ad63e6b6a48baaedae6985e87d53061] catalog.MetaEditor(432): Updated row test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,075 INFO [PostOpenDeployTasks:8ad63e6b6a48baaedae6985e87d53061] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:14:14,075 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 64c33257daeacd0fe5bf6a175319eadb from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,075 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 8ad63e6b6a48baaedae6985e87d53061 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,078 INFO [PostOpenDeployTasks:d3ed59de1135ee985829ee3cbad0cee2] catalog.MetaEditor(432): Updated row test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 
with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,078 INFO [PostOpenDeployTasks:d3ed59de1135ee985829ee3cbad0cee2] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:14,079 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node d3ed59de1135ee985829ee3cbad0cee2 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,080 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(629): Onlined 23b3aa990a7ac4e12882f9d3eca30eea; next sequenceid=1 2013-07-16 17:14:14,080 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:14,080 INFO [PostOpenDeployTasks:d29efc5b487c6ba1411a330e6ea9abfc] catalog.MetaEditor(432): Updated row test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,080 INFO [PostOpenDeployTasks:d29efc5b487c6ba1411a330e6ea9abfc] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:14:14,081 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node d29efc5b487c6ba1411a330e6ea9abfc from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,082 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined 072118ef6c0d2e55b3a9ef36a82f9fae; next sequenceid=1 2013-07-16 17:14:14,082 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 072118ef6c0d2e55b3a9ef36a82f9fae 2013-07-16 17:14:14,085 INFO [PostOpenDeployTasks:23b3aa990a7ac4e12882f9d3eca30eea] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 
2013-07-16 17:14:14,086 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:14,087 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/8ad63e6b6a48baaedae6985e87d53061 2013-07-16 17:14:14,087 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:14,089 DEBUG [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=64c33257daeacd0fe5bf6a175319eadb, current state from region state map ={64c33257daeacd0fe5bf6a175319eadb state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,089 INFO [AM.ZK.Worker-pool-2-thread-12] master.RegionStates(265): Transitioned from {64c33257daeacd0fe5bf6a175319eadb state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {64c33257daeacd0fe5bf6a175319eadb state=OPEN, ts=1373994854089, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,089 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 64c33257daeacd0fe5bf6a175319eadb from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,090 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 64c33257daeacd0fe5bf6a175319eadb, NAME => 'test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.', STARTKEY => '', ENDKEY => 'bbb'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,090 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(186): Opened test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,090 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 093d3ef494905701450f33a487333200 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,090 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for 64c33257daeacd0fe5bf6a175319eadb from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,091 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 64c33257daeacd0fe5bf6a175319eadb that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,091 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=d3ed59de1135ee985829ee3cbad0cee2, current state from region state map ={d3ed59de1135ee985829ee3cbad0cee2 state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,091 DEBUG [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=8ad63e6b6a48baaedae6985e87d53061, current state from region state map ={8ad63e6b6a48baaedae6985e87d53061 state=OPENING, ts=1373994854003, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,091 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(265): Transitioned from {d3ed59de1135ee985829ee3cbad0cee2 state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {d3ed59de1135ee985829ee3cbad0cee2 state=OPEN, ts=1373994854091, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,092 INFO [AM.ZK.Worker-pool-2-thread-13] master.RegionStates(265): Transitioned from {8ad63e6b6a48baaedae6985e87d53061 state=OPENING, ts=1373994854003, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {8ad63e6b6a48baaedae6985e87d53061 state=OPEN, ts=1373994854092, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,092 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for d3ed59de1135ee985829ee3cbad0cee2 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,092 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 8ad63e6b6a48baaedae6985e87d53061 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,093 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for 8ad63e6b6a48baaedae6985e87d53061 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,093 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for d3ed59de1135ee985829ee3cbad0cee2 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,093 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): 
region transitioned to opened in zookeeper: {ENCODED => 8ad63e6b6a48baaedae6985e87d53061, NAME => 'test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061.', STARTKEY => 'bbb', ENDKEY => 'ccc'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,093 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node d3ed59de1135ee985829ee3cbad0cee2 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,093 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => d3ed59de1135ee985829ee3cbad0cee2, NAME => 'test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.', STARTKEY => 'ggg', ENDKEY => 'hhh'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,093 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(186): Opened test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,093 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 8ad63e6b6a48baaedae6985e87d53061 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,094 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 8316cb643e8db1f47659c2704a5d85bd from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,093 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,095 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node c7ae28d709ff479c3e4baad82cd99ca0 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,098 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,098 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node d29efc5b487c6ba1411a330e6ea9abfc from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,098 INFO [PostOpenDeployTasks:072118ef6c0d2e55b3a9ef36a82f9fae] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:14:14,098 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => d29efc5b487c6ba1411a330e6ea9abfc, NAME => 'test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.', STARTKEY => 'kkk', ENDKEY => 'lll'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,099 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,099 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,101 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 930e643b6dd6efc74f14deb95249db91 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,106 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d29efc5b487c6ba1411a330e6ea9abfc 2013-07-16 17:14:14,107 INFO [PostOpenDeployTasks:23b3aa990a7ac4e12882f9d3eca30eea] catalog.MetaEditor(432): Updated row test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,107 INFO [PostOpenDeployTasks:23b3aa990a7ac4e12882f9d3eca30eea] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:14,108 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 23b3aa990a7ac4e12882f9d3eca30eea from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,110 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,110 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,110 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 7 2013-07-16 17:14:14,110 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 7 2013-07-16 17:14:14,115 INFO [PostOpenDeployTasks:072118ef6c0d2e55b3a9ef36a82f9fae] catalog.MetaEditor(432): Updated row test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,116 INFO [PostOpenDeployTasks:072118ef6c0d2e55b3a9ef36a82f9fae] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 
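The two ReplicationSource entries just above, "Unable to open a reader, sleeping 100 times 7", show each replication source backing off while the freshly rolled WAL it wants to tail is not yet readable: evidently a base sleep of 100 ms multiplied by an attempt counter of 7 that grows toward a cap. The sketch below is only an illustration of that multiplier-based backoff pattern, not the actual ReplicationSource code; sleepForRetries, maxRetriesMultiplier and openReader are hypothetical names.

    /** Illustrative sketch of a multiplier-based retry sleep, similar in spirit to the
     *  "Unable to open a reader, sleeping 100 times 7" log line. Names are hypothetical. */
    public final class BackoffSketch {
      private static final long SLEEP_FOR_RETRIES_MS = 100; // base sleep, like the "100" in the log
      private static final int MAX_RETRIES_MULTIPLIER = 10; // cap on the multiplier

      /** Stand-in for the operation that keeps failing (e.g. opening a WAL reader). */
      private static boolean openReader(int attempt) {
        return attempt >= 7; // pretend it finally succeeds on the 7th try
      }

      public static void main(String[] args) throws InterruptedException {
        int multiplier = 1;
        while (!openReader(multiplier)) {
          System.out.println("Unable to open a reader, sleeping "
              + SLEEP_FOR_RETRIES_MS + " times " + multiplier);
          Thread.sleep(SLEEP_FOR_RETRIES_MS * multiplier);
          if (multiplier < MAX_RETRIES_MULTIPLIER) {
            multiplier++; // back off a little more each time, up to the cap
          }
        }
        System.out.println("Reader opened after backing off.");
      }
    }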
2013-07-16 17:14:14,116 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 072118ef6c0d2e55b3a9ef36a82f9fae from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,119 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node c7ae28d709ff479c3e4baad82cd99ca0 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,119 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => c7ae28d709ff479c3e4baad82cd99ca0, NAME => 'test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0.', STARTKEY => 'ooo', ENDKEY => 'ppp'} 2013-07-16 17:14:14,120 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test c7ae28d709ff479c3e4baad82cd99ca0 2013-07-16 17:14:14,121 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:14:14,121 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/c7ae28d709ff479c3e4baad82cd99ca0 2013-07-16 17:14:14,122 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/8ad63e6b6a48baaedae6985e87d53061 2013-07-16 17:14:14,123 DEBUG [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager$4(1218): The znode of test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. has been deleted, region state: {8ad63e6b6a48baaedae6985e87d53061 state=OPEN, ts=1373994854092, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,124 INFO [AM.ZK.Worker-pool-2-thread-17] master.RegionStates(301): Onlined 8ad63e6b6a48baaedae6985e87d53061 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,124 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,125 INFO [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager$4(1223): The master has opened test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,129 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 8ad63e6b6a48baaedae6985e87d53061 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,130 INFO [StoreOpener-c7ae28d709ff479c3e4baad82cd99ca0-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,131 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 930e643b6dd6efc74f14deb95249db91 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,131 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => 930e643b6dd6efc74f14deb95249db91, NAME => 'test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91.', STARTKEY => 'qqq', ENDKEY => 'rrr'} 2013-07-16 17:14:14,132 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 930e643b6dd6efc74f14deb95249db91 2013-07-16 17:14:14,132 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:14:14,133 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 072118ef6c0d2e55b3a9ef36a82f9fae from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,134 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 072118ef6c0d2e55b3a9ef36a82f9fae, NAME => 'test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae.', STARTKEY => 'mmm', ENDKEY => 'nnn'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,134 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,134 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 287928895932801d51170fb202253eac from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,139 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 093d3ef494905701450f33a487333200 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,139 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(4192): Open {ENCODED => 093d3ef494905701450f33a487333200, NAME => 'test,nnn,1373994853026.093d3ef494905701450f33a487333200.', STARTKEY => 'nnn', ENDKEY => 'ooo'} 2013-07-16 17:14:14,139 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:14,140 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 093d3ef494905701450f33a487333200 2013-07-16 17:14:14,140 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(534): Instantiated test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:14,140 INFO [StoreOpener-c7ae28d709ff479c3e4baad82cd99ca0-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,142 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 8316cb643e8db1f47659c2704a5d85bd from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,142 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(4192): Open {ENCODED => 8316cb643e8db1f47659c2704a5d85bd, NAME => 'test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.', STARTKEY => 'ppp', ENDKEY => 'qqq'} 2013-07-16 17:14:14,143 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:14,143 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 64c33257daeacd0fe5bf6a175319eadb in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,143 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(534): Instantiated test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:14,145 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager$4(1218): The znode of test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
has been deleted, region state: {64c33257daeacd0fe5bf6a175319eadb state=OPEN, ts=1373994854089, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,147 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:14,147 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region d3ed59de1135ee985829ee3cbad0cee2 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,148 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 23b3aa990a7ac4e12882f9d3eca30eea from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,148 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 23b3aa990a7ac4e12882f9d3eca30eea, NAME => 'test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.', STARTKEY => 'lll', ENDKEY => 'mmm'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,148 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(186): Opened test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,148 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 287928895932801d51170fb202253eac from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,148 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node c4611b71a935e3b170cd961ded7d0820 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,148 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => 287928895932801d51170fb202253eac, NAME => 'test,sss,1373994853027.287928895932801d51170fb202253eac.', STARTKEY => 'sss', ENDKEY => 'ttt'} 2013-07-16 17:14:14,149 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 287928895932801d51170fb202253eac 2013-07-16 17:14:14,149 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,sss,1373994853027.287928895932801d51170fb202253eac. 
2013-07-16 17:14:14,151 INFO [StoreOpener-930e643b6dd6efc74f14deb95249db91-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,153 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined c7ae28d709ff479c3e4baad82cd99ca0; next sequenceid=1 2013-07-16 17:14:14,154 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node c7ae28d709ff479c3e4baad82cd99ca0 2013-07-16 17:14:14,151 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(301): Onlined 64c33257daeacd0fe5bf6a175319eadb on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,156 INFO [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager$4(1223): The master has opened test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,156 DEBUG [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager$4(1218): The znode of test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. has been deleted, region state: {d3ed59de1135ee985829ee3cbad0cee2 state=OPEN, ts=1373994854091, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,155 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/093d3ef494905701450f33a487333200 2013-07-16 17:14:14,156 INFO [AM.ZK.Worker-pool-2-thread-20] master.RegionStates(301): Onlined d3ed59de1135ee985829ee3cbad0cee2 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,157 INFO [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager$4(1223): The master has opened test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,156 DEBUG [AM.ZK.Worker-pool-2-thread-15] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=d29efc5b487c6ba1411a330e6ea9abfc, current state from region state map ={d29efc5b487c6ba1411a330e6ea9abfc state=OPENING, ts=1373994854002, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,157 INFO [AM.ZK.Worker-pool-2-thread-15] master.RegionStates(265): Transitioned from {d29efc5b487c6ba1411a330e6ea9abfc state=OPENING, ts=1373994854002, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {d29efc5b487c6ba1411a330e6ea9abfc state=OPEN, ts=1373994854157, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,157 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/930e643b6dd6efc74f14deb95249db91 2013-07-16 17:14:14,158 INFO [StoreOpener-093d3ef494905701450f33a487333200-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,158 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node c4611b71a935e3b170cd961ded7d0820 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,159 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for d29efc5b487c6ba1411a330e6ea9abfc from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,159 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(4192): Open {ENCODED => c4611b71a935e3b170cd961ded7d0820, NAME => 'test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.', STARTKEY => 'rrr', ENDKEY => 'sss'} 2013-07-16 17:14:14,160 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:14,160 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(534): Instantiated test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 
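Every RegionStates(265) line in this stretch records one hop of the master's assignment state machine for a single region: PENDING_OPEN to OPENING when the regionserver claims the znode, then OPENING to OPEN once the open is reported back. A minimal, purely illustrative model of that bookkeeping follows; none of these class or method names are HBase's.

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;

    /** Toy model of the master-side region state bookkeeping seen in the
     *  RegionStates log lines; names and structure are illustrative only. */
    public final class RegionStateSketch {
      enum State { OFFLINE, PENDING_OPEN, OPENING, OPEN }

      // Legal forward transitions, mirroring what the log shows.
      private static final Map<State, EnumSet<State>> ALLOWED = new EnumMap<>(State.class);
      static {
        ALLOWED.put(State.OFFLINE, EnumSet.of(State.PENDING_OPEN));
        ALLOWED.put(State.PENDING_OPEN, EnumSet.of(State.OPENING));
        ALLOWED.put(State.OPENING, EnumSet.of(State.OPEN));
        ALLOWED.put(State.OPEN, EnumSet.noneOf(State.class));
      }

      private State current = State.PENDING_OPEN;

      /** Apply one transition and print a line in the spirit of RegionStates(265). */
      void transition(String encodedRegion, State next) {
        if (!ALLOWED.get(current).contains(next)) {
          throw new IllegalStateException(current + " -> " + next + " is not a legal hop");
        }
        System.out.println("Transitioned " + encodedRegion + " from " + current + " to " + next);
        current = next;
      }

      public static void main(String[] args) {
        RegionStateSketch region = new RegionStateSketch();
        region.transition("c7ae28d709ff479c3e4baad82cd99ca0", State.OPENING);
        region.transition("c7ae28d709ff479c3e4baad82cd99ca0", State.OPEN);
      }
    }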
2013-07-16 17:14:14,161 INFO [StoreOpener-8316cb643e8db1f47659c2704a5d85bd-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,158 DEBUG [AM.ZK.Worker-pool-2-thread-16] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=c7ae28d709ff479c3e4baad82cd99ca0, current state from region state map ={c7ae28d709ff479c3e4baad82cd99ca0 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,158 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/072118ef6c0d2e55b3a9ef36a82f9fae 2013-07-16 17:14:14,164 INFO [AM.ZK.Worker-pool-2-thread-16] master.RegionStates(265): Transitioned from {c7ae28d709ff479c3e4baad82cd99ca0 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {c7ae28d709ff479c3e4baad82cd99ca0 state=OPENING, ts=1373994854164, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,162 INFO [StoreOpener-930e643b6dd6efc74f14deb95249db91-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,162 INFO [PostOpenDeployTasks:c7ae28d709ff479c3e4baad82cd99ca0] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 
2013-07-16 17:14:14,159 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for d29efc5b487c6ba1411a330e6ea9abfc that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,165 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:14,167 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/287928895932801d51170fb202253eac 2013-07-16 17:14:14,167 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:14,167 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:14,169 INFO [StoreOpener-093d3ef494905701450f33a487333200-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,169 DEBUG [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=093d3ef494905701450f33a487333200, current state from region state map ={093d3ef494905701450f33a487333200 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,169 INFO [AM.ZK.Worker-pool-2-thread-1] master.RegionStates(265): Transitioned from {093d3ef494905701450f33a487333200 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {093d3ef494905701450f33a487333200 state=OPENING, ts=1373994854169, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,172 INFO [StoreOpener-c4611b71a935e3b170cd961ded7d0820-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,175 DEBUG [AM.ZK.Worker-pool-2-thread-15] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=930e643b6dd6efc74f14deb95249db91, current state from region state map ={930e643b6dd6efc74f14deb95249db91 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,175 INFO [AM.ZK.Worker-pool-2-thread-15] master.RegionStates(265): Transitioned from {930e643b6dd6efc74f14deb95249db91 state=PENDING_OPEN, ts=1373994853894, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {930e643b6dd6efc74f14deb95249db91 state=OPENING, ts=1373994854175, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,177 INFO 
[StoreOpener-287928895932801d51170fb202253eac-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,177 DEBUG [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=072118ef6c0d2e55b3a9ef36a82f9fae, current state from region state map ={072118ef6c0d2e55b3a9ef36a82f9fae state=OPENING, ts=1373994854002, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,177 INFO [AM.ZK.Worker-pool-2-thread-3] master.RegionStates(265): Transitioned from {072118ef6c0d2e55b3a9ef36a82f9fae state=OPENING, ts=1373994854002, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {072118ef6c0d2e55b3a9ef36a82f9fae state=OPEN, ts=1373994854177, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,177 INFO [StoreOpener-8316cb643e8db1f47659c2704a5d85bd-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,178 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(629): Onlined 930e643b6dd6efc74f14deb95249db91; next sequenceid=1 2013-07-16 17:14:14,179 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 930e643b6dd6efc74f14deb95249db91 2013-07-16 17:14:14,178 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(145): Handling OPENED event for 072118ef6c0d2e55b3a9ef36a82f9fae from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,180 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 072118ef6c0d2e55b3a9ef36a82f9fae that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,183 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(629): Onlined 093d3ef494905701450f33a487333200; next sequenceid=1 2013-07-16 17:14:14,183 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 093d3ef494905701450f33a487333200 2013-07-16 17:14:14,185 INFO [PostOpenDeployTasks:c7ae28d709ff479c3e4baad82cd99ca0] catalog.MetaEditor(432): Updated row test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,185 INFO [PostOpenDeployTasks:c7ae28d709ff479c3e4baad82cd99ca0] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 
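The CompactionConfiguration(85) line that repeats for every store opener is the regionserver echoing its compaction thresholds; the raw numbers decode to a 128 MiB minimum file size, Long.MAX_VALUE as the (effectively unbounded) maximum, a 2.5 GiB throttle point and a 7-day major compaction period. A quick check of that arithmetic:

    import java.util.concurrent.TimeUnit;

    /** Decodes the numbers in the repeated CompactionConfiguration(85) log line. */
    public final class CompactionNumbers {
      public static void main(String[] args) {
        long minSize = 134217728L;            // lower bound of "size [...)": 128 MiB
        long maxSize = 9223372036854775807L;  // upper bound: Long.MAX_VALUE, i.e. no real limit
        long throttlePoint = 2684354560L;     // 2.5 GiB
        long majorPeriodMs = 604800000L;      // major compaction period in milliseconds

        System.out.println("min size MiB = " + minSize / (1024 * 1024));
        System.out.println("max size     = " + (maxSize == Long.MAX_VALUE ? "Long.MAX_VALUE" : maxSize));
        System.out.println("throttle GiB = " + throttlePoint / (1024.0 * 1024 * 1024));
        System.out.println("major period = " + TimeUnit.MILLISECONDS.toDays(majorPeriodMs) + " days");
      }
    }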
2013-07-16 17:14:14,186 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node c7ae28d709ff479c3e4baad82cd99ca0 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,187 INFO [PostOpenDeployTasks:930e643b6dd6efc74f14deb95249db91] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:14:14,188 INFO [StoreOpener-c4611b71a935e3b170cd961ded7d0820-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,189 DEBUG [AM.ZK.Worker-pool-2-thread-9] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=c4611b71a935e3b170cd961ded7d0820, current state from region state map ={c4611b71a935e3b170cd961ded7d0820 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,189 INFO [AM.ZK.Worker-pool-2-thread-9] master.RegionStates(265): Transitioned from {c4611b71a935e3b170cd961ded7d0820 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {c4611b71a935e3b170cd961ded7d0820 state=OPENING, ts=1373994854189, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,190 INFO [PostOpenDeployTasks:093d3ef494905701450f33a487333200] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
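Each "Attempting to transition node ... from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED" followed by "Successfully transitioned node ..." is a version-checked update of the region's znode under /1/region-in-transition, so only one worker can move a node forward. The sketch below shows that compare-and-set step with the plain ZooKeeper client; the znode payload is a simplified placeholder rather than ZKAssign's real serialized format, and the ensemble address is hypothetical.

    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    /** Sketch of a version-checked znode state flip, in the spirit of the ZKAssign
     *  "Attempting to transition node ..." lines. Payload format is a made-up placeholder. */
    public final class ZnodeTransitionSketch {

      /** Returns true if the node held expectedState and was flipped to newState. */
      static boolean transition(ZooKeeper zk, String path, String expectedState, String newState)
          throws KeeperException, InterruptedException {
        Stat stat = new Stat();
        byte[] data = zk.getData(path, false, stat);            // read current state and version
        if (!expectedState.equals(new String(data, StandardCharsets.UTF_8))) {
          return false;                                         // another worker already moved it
        }
        try {
          // setData with the version we read: ZooKeeper rejects it if the node changed underneath us.
          zk.setData(path, newState.getBytes(StandardCharsets.UTF_8), stat.getVersion());
        } catch (KeeperException.BadVersionException racedOut) {
          return false;
        }
        return true;
      }

      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();
          }
        });
        connected.await();
        String znode = "/1/region-in-transition/c7ae28d709ff479c3e4baad82cd99ca0"; // path shape from the log
        boolean moved = transition(zk, znode, "RS_ZK_REGION_OPENING", "RS_ZK_REGION_OPENED");
        System.out.println(moved ? "Successfully transitioned node" : "Lost the race; leaving the node alone");
        zk.close();
      }
    }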
2013-07-16 17:14:14,190 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(629): Onlined 8316cb643e8db1f47659c2704a5d85bd; next sequenceid=1 2013-07-16 17:14:14,192 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:14,192 DEBUG [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=23b3aa990a7ac4e12882f9d3eca30eea, current state from region state map ={23b3aa990a7ac4e12882f9d3eca30eea state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,192 INFO [AM.ZK.Worker-pool-2-thread-11] master.RegionStates(265): Transitioned from {23b3aa990a7ac4e12882f9d3eca30eea state=OPENING, ts=1373994854022, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {23b3aa990a7ac4e12882f9d3eca30eea state=OPEN, ts=1373994854192, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,192 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for 23b3aa990a7ac4e12882f9d3eca30eea from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,193 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 23b3aa990a7ac4e12882f9d3eca30eea that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,194 DEBUG [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=287928895932801d51170fb202253eac, current state from region state map ={287928895932801d51170fb202253eac state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,194 INFO [AM.ZK.Worker-pool-2-thread-5] master.RegionStates(265): Transitioned from {287928895932801d51170fb202253eac state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {287928895932801d51170fb202253eac state=OPENING, ts=1373994854194, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,196 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node c7ae28d709ff479c3e4baad82cd99ca0 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,196 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => c7ae28d709ff479c3e4baad82cd99ca0, NAME => 'test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0.', STARTKEY => 'ooo', ENDKEY => 'ppp'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,196 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,197 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 253df35786418e184ed944fb4881aa4b from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,197 INFO [PostOpenDeployTasks:8316cb643e8db1f47659c2704a5d85bd] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:14,198 DEBUG [AM.ZK.Worker-pool-2-thread-4] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=8316cb643e8db1f47659c2704a5d85bd, current state from region state map ={8316cb643e8db1f47659c2704a5d85bd state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,198 INFO [StoreOpener-287928895932801d51170fb202253eac-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,198 INFO [AM.ZK.Worker-pool-2-thread-4] master.RegionStates(265): Transitioned from {8316cb643e8db1f47659c2704a5d85bd state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {8316cb643e8db1f47659c2704a5d85bd state=OPENING, ts=1373994854198, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,203 INFO [PostOpenDeployTasks:930e643b6dd6efc74f14deb95249db91] catalog.MetaEditor(432): Updated row test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,203 INFO [PostOpenDeployTasks:930e643b6dd6efc74f14deb95249db91] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:14:14,204 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 930e643b6dd6efc74f14deb95249db91 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,207 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 253df35786418e184ed944fb4881aa4b from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,207 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => 253df35786418e184ed944fb4881aa4b, NAME => 'test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b.', STARTKEY => 'vvv', ENDKEY => 'www'} 2013-07-16 17:14:14,207 INFO [PostOpenDeployTasks:093d3ef494905701450f33a487333200] catalog.MetaEditor(432): Updated row test,nnn,1373994853026.093d3ef494905701450f33a487333200. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,207 INFO [PostOpenDeployTasks:093d3ef494905701450f33a487333200] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
2013-07-16 17:14:14,208 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 253df35786418e184ed944fb4881aa4b 2013-07-16 17:14:14,208 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/d29efc5b487c6ba1411a330e6ea9abfc 2013-07-16 17:14:14,208 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:14:14,208 DEBUG [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1218): The znode of test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. has been deleted, region state: {d29efc5b487c6ba1411a330e6ea9abfc state=OPEN, ts=1373994854157, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,208 INFO [AM.ZK.Worker-pool-2-thread-7] master.RegionStates(301): Onlined d29efc5b487c6ba1411a330e6ea9abfc on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,208 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,208 INFO [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1223): The master has opened test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,211 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined 287928895932801d51170fb202253eac; next sequenceid=1 2013-07-16 17:14:14,211 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 287928895932801d51170fb202253eac 2013-07-16 17:14:14,211 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 093d3ef494905701450f33a487333200 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,212 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region d29efc5b487c6ba1411a330e6ea9abfc in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,216 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/c7ae28d709ff479c3e4baad82cd99ca0 2013-07-16 17:14:14,219 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 093d3ef494905701450f33a487333200 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,219 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 093d3ef494905701450f33a487333200, NAME => 'test,nnn,1373994853026.093d3ef494905701450f33a487333200.', STARTKEY => 'nnn', ENDKEY => 'ooo'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,219 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(186): Opened test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,220 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 4ac8676e6af9c1c25f2f2a90ed99d3ae from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,219 INFO [StoreOpener-253df35786418e184ed944fb4881aa4b-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,222 INFO [PostOpenDeployTasks:8316cb643e8db1f47659c2704a5d85bd] catalog.MetaEditor(432): Updated row test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,222 INFO [PostOpenDeployTasks:8316cb643e8db1f47659c2704a5d85bd] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:14,222 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 930e643b6dd6efc74f14deb95249db91 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,222 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 8316cb643e8db1f47659c2704a5d85bd from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,222 INFO [PostOpenDeployTasks:287928895932801d51170fb202253eac] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,sss,1373994853027.287928895932801d51170fb202253eac. 2013-07-16 17:14:14,222 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 930e643b6dd6efc74f14deb95249db91, NAME => 'test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91.', STARTKEY => 'qqq', ENDKEY => 'rrr'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,223 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,223 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 7dde26b51ab247338eaa8d5e372498e9 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,231 INFO [StoreOpener-253df35786418e184ed944fb4881aa4b-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,235 INFO [PostOpenDeployTasks:287928895932801d51170fb202253eac] catalog.MetaEditor(432): Updated row test,sss,1373994853027.287928895932801d51170fb202253eac. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,235 INFO [PostOpenDeployTasks:287928895932801d51170fb202253eac] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,sss,1373994853027.287928895932801d51170fb202253eac. 
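The PostOpenDeployTasks / MetaEditor pairs above ("Updated row test,... with server=...") record the regionserver publishing each region's new location into the catalog table (.META. in this build) right after opening it. Roughly, and only as an illustration with the old 0.9x-era client API, that amounts to a Put against the region's catalog row; in reality MetaEditor does this internally and splits the location across separate info columns, so treat the column layout below as a simplification and never write to the catalog by hand on a real cluster.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    /** For-illustration-only equivalent of the "Updated row ... with server=..." lines:
     *  a Put of the hosting server into the catalog row for that region. */
    public final class MetaRowSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml / ZK quorum
        HTable meta = new HTable(conf, ".META.");          // catalog table name in this build

        // Row key is the region name; the real MetaEditor splits server and startcode
        // across separate info columns, simplified here into a single info:server value.
        Put locationUpdate = new Put(
            Bytes.toBytes("test,sss,1373994853027.287928895932801d51170fb202253eac."));
        locationUpdate.add(Bytes.toBytes("info"), Bytes.toBytes("server"),
            Bytes.toBytes("ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736"));
        meta.put(locationUpdate);
        meta.close();
      }
    }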
2013-07-16 17:14:14,236 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 287928895932801d51170fb202253eac from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,236 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/253df35786418e184ed944fb4881aa4b 2013-07-16 17:14:14,237 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 7dde26b51ab247338eaa8d5e372498e9 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,237 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => 7dde26b51ab247338eaa8d5e372498e9, NAME => 'test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9.', STARTKEY => 'xxx', ENDKEY => 'yyy'} 2013-07-16 17:14:14,238 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 7dde26b51ab247338eaa8d5e372498e9 2013-07-16 17:14:14,239 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:14:14,240 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(629): Onlined c4611b71a935e3b170cd961ded7d0820; next sequenceid=1 2013-07-16 17:14:14,240 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:14,242 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:14,244 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/072118ef6c0d2e55b3a9ef36a82f9fae 2013-07-16 17:14:14,244 DEBUG [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager$4(1218): The znode of test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. has been deleted, region state: {072118ef6c0d2e55b3a9ef36a82f9fae state=OPEN, ts=1373994854177, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,244 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 4ac8676e6af9c1c25f2f2a90ed99d3ae from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,245 INFO [AM.ZK.Worker-pool-2-thread-12] master.RegionStates(301): Onlined 072118ef6c0d2e55b3a9ef36a82f9fae on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,245 INFO [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager$4(1223): The master has opened test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,245 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 072118ef6c0d2e55b3a9ef36a82f9fae in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,245 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(4192): Open {ENCODED => 4ac8676e6af9c1c25f2f2a90ed99d3ae, NAME => 'test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.', STARTKEY => 'ttt', ENDKEY => 'uuu'} 2013-07-16 17:14:14,246 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:14,246 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/093d3ef494905701450f33a487333200 2013-07-16 17:14:14,246 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(534): Instantiated test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:14,246 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/930e643b6dd6efc74f14deb95249db91 2013-07-16 17:14:14,247 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined 253df35786418e184ed944fb4881aa4b; next sequenceid=1 2013-07-16 17:14:14,247 DEBUG [AM.ZK.Worker-pool-2-thread-6] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=c7ae28d709ff479c3e4baad82cd99ca0, current state from region state map ={c7ae28d709ff479c3e4baad82cd99ca0 state=OPENING, ts=1373994854164, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,247 INFO [AM.ZK.Worker-pool-2-thread-6] master.RegionStates(265): Transitioned from {c7ae28d709ff479c3e4baad82cd99ca0 state=OPENING, ts=1373994854164, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {c7ae28d709ff479c3e4baad82cd99ca0 state=OPEN, ts=1373994854247, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,248 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for c7ae28d709ff479c3e4baad82cd99ca0 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,248 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for c7ae28d709ff479c3e4baad82cd99ca0 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,248 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/7dde26b51ab247338eaa8d5e372498e9 2013-07-16 17:14:14,248 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:14,249 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): 
regionserver:49955-0x13fe879789b0005 Successfully transitioned node 8316cb643e8db1f47659c2704a5d85bd from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,250 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 8316cb643e8db1f47659c2704a5d85bd, NAME => 'test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.', STARTKEY => 'ppp', ENDKEY => 'qqq'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,250 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(186): Opened test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,250 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node baee7b76d51e7196ee3121edc50bda59 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,247 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 253df35786418e184ed944fb4881aa4b 2013-07-16 17:14:14,252 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:14,253 INFO [StoreOpener-7dde26b51ab247338eaa8d5e372498e9-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,255 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 23b3aa990a7ac4e12882f9d3eca30eea in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,256 DEBUG [AM.ZK.Worker-pool-2-thread-10] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=253df35786418e184ed944fb4881aa4b, current state from region state map ={253df35786418e184ed944fb4881aa4b state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,256 INFO [AM.ZK.Worker-pool-2-thread-10] master.RegionStates(265): Transitioned from {253df35786418e184ed944fb4881aa4b state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {253df35786418e184ed944fb4881aa4b state=OPENING, ts=1373994854256, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,256 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=093d3ef494905701450f33a487333200, current state from region state map ={093d3ef494905701450f33a487333200 state=OPENING, ts=1373994854169, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,257 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(265): Transitioned from {093d3ef494905701450f33a487333200 state=OPENING, ts=1373994854169, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {093d3ef494905701450f33a487333200 
state=OPEN, ts=1373994854257, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,257 DEBUG [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=930e643b6dd6efc74f14deb95249db91, current state from region state map ={930e643b6dd6efc74f14deb95249db91 state=OPENING, ts=1373994854175, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,263 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,263 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:14,264 INFO [PostOpenDeployTasks:c4611b71a935e3b170cd961ded7d0820] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:14,264 INFO [AM.ZK.Worker-pool-2-thread-13] master.RegionStates(265): Transitioned from {930e643b6dd6efc74f14deb95249db91 state=OPENING, ts=1373994854175, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {930e643b6dd6efc74f14deb95249db91 state=OPEN, ts=1373994854257, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,264 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for 930e643b6dd6efc74f14deb95249db91 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,264 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 930e643b6dd6efc74f14deb95249db91 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,265 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for 093d3ef494905701450f33a487333200 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,265 DEBUG [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager$4(1218): The znode of test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. has been deleted, region state: {23b3aa990a7ac4e12882f9d3eca30eea state=OPEN, ts=1373994854192, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,265 INFO [AM.ZK.Worker-pool-2-thread-20] master.RegionStates(301): Onlined 23b3aa990a7ac4e12882f9d3eca30eea on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,265 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 093d3ef494905701450f33a487333200 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,265 INFO [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager$4(1223): The master has opened test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,266 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:14,266 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node baee7b76d51e7196ee3121edc50bda59 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,267 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(4192): Open {ENCODED => baee7b76d51e7196ee3121edc50bda59, NAME => 'test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.', STARTKEY => 'www', ENDKEY => 'xxx'} 2013-07-16 17:14:14,267 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:14,268 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(534): Instantiated test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:14,268 INFO [PostOpenDeployTasks:253df35786418e184ed944fb4881aa4b] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:14:14,270 INFO [StoreOpener-4ac8676e6af9c1c25f2f2a90ed99d3ae-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,269 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 287928895932801d51170fb202253eac from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,271 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 287928895932801d51170fb202253eac, NAME => 'test,sss,1373994853027.287928895932801d51170fb202253eac.', STARTKEY => 'sss', ENDKEY => 'ttt'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,271 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,sss,1373994853027.287928895932801d51170fb202253eac. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,271 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 38600084dc094d719e5c6033fca5452b from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,270 DEBUG [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=7dde26b51ab247338eaa8d5e372498e9, current state from region state map ={7dde26b51ab247338eaa8d5e372498e9 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,271 INFO [AM.ZK.Worker-pool-2-thread-17] master.RegionStates(265): Transitioned from {7dde26b51ab247338eaa8d5e372498e9 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {7dde26b51ab247338eaa8d5e372498e9 state=OPENING, ts=1373994854271, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,273 INFO [StoreOpener-7dde26b51ab247338eaa8d5e372498e9-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,274 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/287928895932801d51170fb202253eac 2013-07-16 17:14:14,279 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=4ac8676e6af9c1c25f2f2a90ed99d3ae, current state from region state map ={4ac8676e6af9c1c25f2f2a90ed99d3ae state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,280 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(265): Transitioned from {4ac8676e6af9c1c25f2f2a90ed99d3ae state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPENING, ts=1373994854280, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,280 INFO [StoreOpener-baee7b76d51e7196ee3121edc50bda59-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,281 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 38600084dc094d719e5c6033fca5452b from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,281 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=8316cb643e8db1f47659c2704a5d85bd, current state from region state map ={8316cb643e8db1f47659c2704a5d85bd state=OPENING, ts=1373994854198, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,280 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] 
regionserver.HRegion(629): Onlined 7dde26b51ab247338eaa8d5e372498e9; next sequenceid=1 2013-07-16 17:14:14,282 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(265): Transitioned from {8316cb643e8db1f47659c2704a5d85bd state=OPENING, ts=1373994854198, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {8316cb643e8db1f47659c2704a5d85bd state=OPEN, ts=1373994854282, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,282 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => 38600084dc094d719e5c6033fca5452b, NAME => 'test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.', STARTKEY => 'zzz', ENDKEY => ''} 2013-07-16 17:14:14,282 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(145): Handling OPENED event for 8316cb643e8db1f47659c2704a5d85bd from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,282 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 8316cb643e8db1f47659c2704a5d85bd that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,282 INFO [PostOpenDeployTasks:c4611b71a935e3b170cd961ded7d0820] catalog.MetaEditor(432): Updated row test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,283 INFO [PostOpenDeployTasks:c4611b71a935e3b170cd961ded7d0820] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:14,283 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 38600084dc094d719e5c6033fca5452b 2013-07-16 17:14:14,282 INFO [StoreOpener-4ac8676e6af9c1c25f2f2a90ed99d3ae-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,284 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node c4611b71a935e3b170cd961ded7d0820 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,283 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 
2013-07-16 17:14:14,282 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 7dde26b51ab247338eaa8d5e372498e9 2013-07-16 17:14:14,290 INFO [StoreOpener-baee7b76d51e7196ee3121edc50bda59-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,291 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(629): Onlined 4ac8676e6af9c1c25f2f2a90ed99d3ae; next sequenceid=1 2013-07-16 17:14:14,292 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:14,292 INFO [PostOpenDeployTasks:253df35786418e184ed944fb4881aa4b] catalog.MetaEditor(432): Updated row test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,292 INFO [PostOpenDeployTasks:253df35786418e184ed944fb4881aa4b] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:14:14,293 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 253df35786418e184ed944fb4881aa4b from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,294 DEBUG [AM.ZK.Worker-pool-2-thread-16] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=baee7b76d51e7196ee3121edc50bda59, current state from region state map ={baee7b76d51e7196ee3121edc50bda59 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,294 INFO [AM.ZK.Worker-pool-2-thread-16] master.RegionStates(265): Transitioned from {baee7b76d51e7196ee3121edc50bda59 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {baee7b76d51e7196ee3121edc50bda59 state=OPENING, ts=1373994854294, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,295 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/c7ae28d709ff479c3e4baad82cd99ca0 2013-07-16 17:14:14,295 DEBUG [AM.ZK.Worker-pool-2-thread-15] master.AssignmentManager$4(1218): The znode of test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. has been deleted, region state: {c7ae28d709ff479c3e4baad82cd99ca0 state=OPEN, ts=1373994854247, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,295 INFO [AM.ZK.Worker-pool-2-thread-15] master.RegionStates(301): Onlined c7ae28d709ff479c3e4baad82cd99ca0 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,295 INFO [AM.ZK.Worker-pool-2-thread-15] master.AssignmentManager$4(1223): The master has opened test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,296 INFO [PostOpenDeployTasks:4ac8676e6af9c1c25f2f2a90ed99d3ae] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:14,296 INFO [PostOpenDeployTasks:7dde26b51ab247338eaa8d5e372498e9] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:14:14,297 INFO [StoreOpener-38600084dc094d719e5c6033fca5452b-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,297 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,297 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(629): Onlined baee7b76d51e7196ee3121edc50bda59; next sequenceid=1 2013-07-16 17:14:14,298 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:14,299 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region c7ae28d709ff479c3e4baad82cd99ca0 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,300 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node c4611b71a935e3b170cd961ded7d0820 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,300 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => c4611b71a935e3b170cd961ded7d0820, NAME => 'test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.', STARTKEY => 'rrr', ENDKEY => 'sss'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,300 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(186): Opened test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,300 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 6ca2c5a98917cab87c982b4bbb7e0115 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,303 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 253df35786418e184ed944fb4881aa4b from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,303 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 253df35786418e184ed944fb4881aa4b, NAME => 'test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b.', STARTKEY => 'vvv', ENDKEY => 'www'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,303 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,303 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node d88c6958af6ef781dd9834d0369f4f70 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,303 INFO [StoreOpener-38600084dc094d719e5c6033fca5452b-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,307 DEBUG [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=287928895932801d51170fb202253eac, current state from region state map ={287928895932801d51170fb202253eac state=OPENING, ts=1373994854194, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,307 INFO [AM.ZK.Worker-pool-2-thread-1] master.RegionStates(265): Transitioned from {287928895932801d51170fb202253eac state=OPENING, ts=1373994854194, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {287928895932801d51170fb202253eac state=OPEN, ts=1373994854307, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,307 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/38600084dc094d719e5c6033fca5452b 2013-07-16 17:14:14,307 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for 287928895932801d51170fb202253eac from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,307 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 287928895932801d51170fb202253eac that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,309 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined 38600084dc094d719e5c6033fca5452b; next sequenceid=1 2013-07-16 17:14:14,309 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 38600084dc094d719e5c6033fca5452b 2013-07-16 17:14:14,312 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/930e643b6dd6efc74f14deb95249db91 2013-07-16 17:14:14,312 DEBUG [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager$4(1218): The znode of test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 
has been deleted, region state: {930e643b6dd6efc74f14deb95249db91 state=OPEN, ts=1373994854257, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,313 INFO [AM.ZK.Worker-pool-2-thread-11] master.RegionStates(301): Onlined 930e643b6dd6efc74f14deb95249db91 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,313 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 930e643b6dd6efc74f14deb95249db91 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,313 INFO [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager$4(1223): The master has opened test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,313 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/093d3ef494905701450f33a487333200 2013-07-16 17:14:14,313 DEBUG [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1218): The znode of test,nnn,1373994853026.093d3ef494905701450f33a487333200. has been deleted, region state: {093d3ef494905701450f33a487333200 state=OPEN, ts=1373994854257, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,314 INFO [AM.ZK.Worker-pool-2-thread-5] master.RegionStates(301): Onlined 093d3ef494905701450f33a487333200 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,314 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 093d3ef494905701450f33a487333200 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,314 INFO [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1223): The master has opened test,nnn,1373994853026.093d3ef494905701450f33a487333200. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,314 INFO [PostOpenDeployTasks:7dde26b51ab247338eaa8d5e372498e9] catalog.MetaEditor(432): Updated row test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,314 INFO [PostOpenDeployTasks:7dde26b51ab247338eaa8d5e372498e9] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 
2013-07-16 17:14:14,314 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:14,315 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 7dde26b51ab247338eaa8d5e372498e9 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,315 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/253df35786418e184ed944fb4881aa4b 2013-07-16 17:14:14,315 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node d88c6958af6ef781dd9834d0369f4f70 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,316 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => d88c6958af6ef781dd9834d0369f4f70, NAME => 'test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70.', STARTKEY => 'ddd', ENDKEY => 'eee'} 2013-07-16 17:14:14,316 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test d88c6958af6ef781dd9834d0369f4f70 2013-07-16 17:14:14,317 INFO [PostOpenDeployTasks:baee7b76d51e7196ee3121edc50bda59] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:14,317 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:14:14,319 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d88c6958af6ef781dd9834d0369f4f70 2013-07-16 17:14:14,322 DEBUG [AM.ZK.Worker-pool-2-thread-9] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=38600084dc094d719e5c6033fca5452b, current state from region state map ={38600084dc094d719e5c6033fca5452b state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,322 INFO [AM.ZK.Worker-pool-2-thread-9] master.RegionStates(265): Transitioned from {38600084dc094d719e5c6033fca5452b state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {38600084dc094d719e5c6033fca5452b state=OPENING, ts=1373994854322, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,323 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:14,323 INFO [PostOpenDeployTasks:38600084dc094d719e5c6033fca5452b] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:14:14,323 DEBUG [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1218): The znode of test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 
has been deleted, region state: {8316cb643e8db1f47659c2704a5d85bd state=OPEN, ts=1373994854282, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,323 INFO [AM.ZK.Worker-pool-2-thread-7] master.RegionStates(301): Onlined 8316cb643e8db1f47659c2704a5d85bd on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,323 INFO [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1223): The master has opened test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,325 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,325 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 8316cb643e8db1f47659c2704a5d85bd in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,326 DEBUG [AM.ZK.Worker-pool-2-thread-8] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=253df35786418e184ed944fb4881aa4b, current state from region state map ={253df35786418e184ed944fb4881aa4b state=OPENING, ts=1373994854256, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,326 INFO [AM.ZK.Worker-pool-2-thread-8] master.RegionStates(265): Transitioned from {253df35786418e184ed944fb4881aa4b state=OPENING, ts=1373994854256, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {253df35786418e184ed944fb4881aa4b state=OPEN, ts=1373994854326, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,326 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for 253df35786418e184ed944fb4881aa4b from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,327 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 253df35786418e184ed944fb4881aa4b that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,327 INFO [PostOpenDeployTasks:4ac8676e6af9c1c25f2f2a90ed99d3ae] catalog.MetaEditor(432): Updated row test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,327 INFO [PostOpenDeployTasks:4ac8676e6af9c1c25f2f2a90ed99d3ae] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 
2013-07-16 17:14:14,327 DEBUG [AM.ZK.Worker-pool-2-thread-4] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=c4611b71a935e3b170cd961ded7d0820, current state from region state map ={c4611b71a935e3b170cd961ded7d0820 state=OPENING, ts=1373994854189, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,328 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 4ac8676e6af9c1c25f2f2a90ed99d3ae from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,328 INFO [AM.ZK.Worker-pool-2-thread-4] master.RegionStates(265): Transitioned from {c4611b71a935e3b170cd961ded7d0820 state=OPENING, ts=1373994854189, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {c4611b71a935e3b170cd961ded7d0820 state=OPEN, ts=1373994854328, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,328 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for c4611b71a935e3b170cd961ded7d0820 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,328 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for c4611b71a935e3b170cd961ded7d0820 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,329 INFO [StoreOpener-d88c6958af6ef781dd9834d0369f4f70-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,329 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 6ca2c5a98917cab87c982b4bbb7e0115 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,330 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(4192): Open {ENCODED => 6ca2c5a98917cab87c982b4bbb7e0115, NAME => 'test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.', STARTKEY => 'yyy', ENDKEY => 'zzz'} 2013-07-16 17:14:14,330 INFO [PostOpenDeployTasks:baee7b76d51e7196ee3121edc50bda59] catalog.MetaEditor(432): Updated row test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,330 INFO [PostOpenDeployTasks:baee7b76d51e7196ee3121edc50bda59] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:14,330 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:14,331 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(534): Instantiated test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 
2013-07-16 17:14:14,331 DEBUG [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=d88c6958af6ef781dd9834d0369f4f70, current state from region state map ={d88c6958af6ef781dd9834d0369f4f70 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,332 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node baee7b76d51e7196ee3121edc50bda59 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,332 INFO [AM.ZK.Worker-pool-2-thread-12] master.RegionStates(265): Transitioned from {d88c6958af6ef781dd9834d0369f4f70 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {d88c6958af6ef781dd9834d0369f4f70 state=OPENING, ts=1373994854332, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,335 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 7dde26b51ab247338eaa8d5e372498e9 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,335 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 7dde26b51ab247338eaa8d5e372498e9, NAME => 'test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9.', STARTKEY => 'xxx', ENDKEY => 'yyy'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,335 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,335 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 7050f74c0058e5a7a912d72a5fd1f4fa from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,336 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:14,337 INFO [StoreOpener-d88c6958af6ef781dd9834d0369f4f70-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,339 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/287928895932801d51170fb202253eac 2013-07-16 17:14:14,340 DEBUG [AM.ZK.Worker-pool-2-thread-10] master.AssignmentManager$4(1218): The znode of test,sss,1373994853027.287928895932801d51170fb202253eac. 
has been deleted, region state: {287928895932801d51170fb202253eac state=OPEN, ts=1373994854307, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,340 INFO [AM.ZK.Worker-pool-2-thread-10] master.RegionStates(301): Onlined 287928895932801d51170fb202253eac on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,340 INFO [AM.ZK.Worker-pool-2-thread-10] master.AssignmentManager$4(1223): The master has opened test,sss,1373994853027.287928895932801d51170fb202253eac. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,340 INFO [StoreOpener-6ca2c5a98917cab87c982b4bbb7e0115-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,341 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 287928895932801d51170fb202253eac in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,342 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/7dde26b51ab247338eaa8d5e372498e9 2013-07-16 17:14:14,342 INFO [PostOpenDeployTasks:38600084dc094d719e5c6033fca5452b] catalog.MetaEditor(432): Updated row test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,342 INFO [PostOpenDeployTasks:38600084dc094d719e5c6033fca5452b] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:14:14,343 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 38600084dc094d719e5c6033fca5452b from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,343 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:14,345 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 7050f74c0058e5a7a912d72a5fd1f4fa from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,345 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => 7050f74c0058e5a7a912d72a5fd1f4fa, NAME => 'test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa.', STARTKEY => 'fff', ENDKEY => 'ggg'} 2013-07-16 17:14:14,346 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 7050f74c0058e5a7a912d72a5fd1f4fa 2013-07-16 17:14:14,346 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 4ac8676e6af9c1c25f2f2a90ed99d3ae from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,346 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 
2013-07-16 17:14:14,346 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 4ac8676e6af9c1c25f2f2a90ed99d3ae, NAME => 'test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.', STARTKEY => 'ttt', ENDKEY => 'uuu'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,347 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(186): Opened test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,347 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node ba6e592748955d732d7843b9603163dc from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,347 INFO [StoreOpener-6ca2c5a98917cab87c982b4bbb7e0115-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,348 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined d88c6958af6ef781dd9834d0369f4f70; next sequenceid=1 2013-07-16 17:14:14,348 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node d88c6958af6ef781dd9834d0369f4f70 2013-07-16 17:14:14,350 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 38600084dc094d719e5c6033fca5452b from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,350 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 38600084dc094d719e5c6033fca5452b, NAME => 'test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.', STARTKEY => 'zzz', ENDKEY => ''}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,350 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,350 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node b9cbc55dd9bcb588274e2598633563b2 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,352 INFO [PostOpenDeployTasks:d88c6958af6ef781dd9834d0369f4f70] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 
2013-07-16 17:14:14,353 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node baee7b76d51e7196ee3121edc50bda59 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,353 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => baee7b76d51e7196ee3121edc50bda59, NAME => 'test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.', STARTKEY => 'www', ENDKEY => 'xxx'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,353 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(186): Opened test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,353 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node f4cfa4d251af617b31eb11c76cc68678 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,354 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(629): Onlined 6ca2c5a98917cab87c982b4bbb7e0115; next sequenceid=1 2013-07-16 17:14:14,354 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:14,354 INFO [StoreOpener-7050f74c0058e5a7a912d72a5fd1f4fa-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,357 DEBUG [AM.ZK.Worker-pool-2-thread-18] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=6ca2c5a98917cab87c982b4bbb7e0115, current state from region state map ={6ca2c5a98917cab87c982b4bbb7e0115 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,357 INFO [AM.ZK.Worker-pool-2-thread-18] master.RegionStates(265): Transitioned from {6ca2c5a98917cab87c982b4bbb7e0115 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {6ca2c5a98917cab87c982b4bbb7e0115 state=OPENING, ts=1373994854357, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,358 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/7050f74c0058e5a7a912d72a5fd1f4fa 2013-07-16 17:14:14,359 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:14,360 INFO [StoreOpener-7050f74c0058e5a7a912d72a5fd1f4fa-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,361 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper 
Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/38600084dc094d719e5c6033fca5452b 2013-07-16 17:14:14,360 DEBUG [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=7dde26b51ab247338eaa8d5e372498e9, current state from region state map ={7dde26b51ab247338eaa8d5e372498e9 state=OPENING, ts=1373994854271, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,361 INFO [AM.ZK.Worker-pool-2-thread-13] master.RegionStates(265): Transitioned from {7dde26b51ab247338eaa8d5e372498e9 state=OPENING, ts=1373994854271, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {7dde26b51ab247338eaa8d5e372498e9 state=OPEN, ts=1373994854361, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,361 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for 7dde26b51ab247338eaa8d5e372498e9 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,361 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 7dde26b51ab247338eaa8d5e372498e9 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,362 DEBUG [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=4ac8676e6af9c1c25f2f2a90ed99d3ae, current state from region state map ={4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPENING, ts=1373994854280, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,362 INFO [AM.ZK.Worker-pool-2-thread-20] master.RegionStates(265): Transitioned from {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPENING, ts=1373994854280, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPEN, ts=1373994854362, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,362 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(145): Handling OPENED event for 4ac8676e6af9c1c25f2f2a90ed99d3ae from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,362 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 4ac8676e6af9c1c25f2f2a90ed99d3ae that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,364 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/b9cbc55dd9bcb588274e2598633563b2 2013-07-16 17:14:14,364 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/253df35786418e184ed944fb4881aa4b 2013-07-16 17:14:14,364 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,364 DEBUG [AM.ZK.Worker-pool-2-thread-16] 
master.AssignmentManager$4(1218): The znode of test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. has been deleted, region state: {253df35786418e184ed944fb4881aa4b state=OPEN, ts=1373994854326, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,364 INFO [AM.ZK.Worker-pool-2-thread-16] master.RegionStates(301): Onlined 253df35786418e184ed944fb4881aa4b on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,365 INFO [AM.ZK.Worker-pool-2-thread-16] master.AssignmentManager$4(1223): The master has opened test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,366 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 253df35786418e184ed944fb4881aa4b in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,366 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(629): Onlined 7050f74c0058e5a7a912d72a5fd1f4fa; next sequenceid=1 2013-07-16 17:14:14,366 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 7050f74c0058e5a7a912d72a5fd1f4fa 2013-07-16 17:14:14,367 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:14,367 DEBUG [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager$4(1218): The znode of test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. has been deleted, region state: {c4611b71a935e3b170cd961ded7d0820 state=OPEN, ts=1373994854328, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,367 INFO [AM.ZK.Worker-pool-2-thread-1] master.RegionStates(301): Onlined c4611b71a935e3b170cd961ded7d0820 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,367 INFO [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager$4(1223): The master has opened test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,367 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node b9cbc55dd9bcb588274e2598633563b2 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,368 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => b9cbc55dd9bcb588274e2598633563b2, NAME => 'test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.', STARTKEY => 'eee', ENDKEY => 'fff'} 2013-07-16 17:14:14,368 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test b9cbc55dd9bcb588274e2598633563b2 2013-07-16 17:14:14,369 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 
2013-07-16 17:14:14,369 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region c4611b71a935e3b170cd961ded7d0820 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,371 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node ba6e592748955d732d7843b9603163dc from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,371 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(4192): Open {ENCODED => ba6e592748955d732d7843b9603163dc, NAME => 'test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.', STARTKEY => 'jjj', ENDKEY => 'kkk'} 2013-07-16 17:14:14,372 INFO [PostOpenDeployTasks:6ca2c5a98917cab87c982b4bbb7e0115] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:14,372 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:14,372 DEBUG [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=7050f74c0058e5a7a912d72a5fd1f4fa, current state from region state map ={7050f74c0058e5a7a912d72a5fd1f4fa state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,372 INFO [PostOpenDeployTasks:d88c6958af6ef781dd9834d0369f4f70] catalog.MetaEditor(432): Updated row test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,372 INFO [PostOpenDeployTasks:d88c6958af6ef781dd9834d0369f4f70] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:14:14,372 INFO [AM.ZK.Worker-pool-2-thread-17] master.RegionStates(265): Transitioned from {7050f74c0058e5a7a912d72a5fd1f4fa state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {7050f74c0058e5a7a912d72a5fd1f4fa state=OPENING, ts=1373994854372, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,372 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(534): Instantiated test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 
2013-07-16 17:14:14,378 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node d88c6958af6ef781dd9834d0369f4f70 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,378 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node f4cfa4d251af617b31eb11c76cc68678 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,379 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(4192): Open {ENCODED => f4cfa4d251af617b31eb11c76cc68678, NAME => 'test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.', STARTKEY => 'ccc', ENDKEY => 'ddd'} 2013-07-16 17:14:14,379 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:14,377 INFO [PostOpenDeployTasks:7050f74c0058e5a7a912d72a5fd1f4fa] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:14:14,380 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(534): Instantiated test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:14,381 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:14,383 INFO [StoreOpener-b9cbc55dd9bcb588274e2598633563b2-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,386 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node d88c6958af6ef781dd9834d0369f4f70 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,386 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => d88c6958af6ef781dd9834d0369f4f70, NAME => 'test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70.', STARTKEY => 'ddd', ENDKEY => 'eee'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,386 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,386 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node f8146b196ac3399ee0b4bd5a227bd634 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,387 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=baee7b76d51e7196ee3121edc50bda59, current state from region state map ={baee7b76d51e7196ee3121edc50bda59 state=OPENING, ts=1373994854294, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,387 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(265): Transitioned from {baee7b76d51e7196ee3121edc50bda59 state=OPENING, ts=1373994854294, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {baee7b76d51e7196ee3121edc50bda59 state=OPEN, ts=1373994854387, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,387 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for baee7b76d51e7196ee3121edc50bda59 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,387 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for baee7b76d51e7196ee3121edc50bda59 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,390 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=38600084dc094d719e5c6033fca5452b, current state from region state map ={38600084dc094d719e5c6033fca5452b state=OPENING, ts=1373994854322, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,390 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(265): Transitioned from {38600084dc094d719e5c6033fca5452b state=OPENING, ts=1373994854322, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {38600084dc094d719e5c6033fca5452b state=OPEN, ts=1373994854390, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,390 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for 38600084dc094d719e5c6033fca5452b from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,390 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 38600084dc094d719e5c6033fca5452b that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,395 INFO [StoreOpener-ba6e592748955d732d7843b9603163dc-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,398 INFO [StoreOpener-b9cbc55dd9bcb588274e2598633563b2-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 
2013-07-16 17:14:14,398 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node f8146b196ac3399ee0b4bd5a227bd634 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,399 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => f8146b196ac3399ee0b4bd5a227bd634, NAME => 'test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634.', STARTKEY => 'uuu', ENDKEY => 'vvv'} 2013-07-16 17:14:14,399 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test f8146b196ac3399ee0b4bd5a227bd634 2013-07-16 17:14:14,400 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:14:14,400 INFO [PostOpenDeployTasks:7050f74c0058e5a7a912d72a5fd1f4fa] catalog.MetaEditor(432): Updated row test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,400 INFO [PostOpenDeployTasks:7050f74c0058e5a7a912d72a5fd1f4fa] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:14:14,401 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 7050f74c0058e5a7a912d72a5fd1f4fa from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,403 INFO [StoreOpener-f4cfa4d251af617b31eb11c76cc68678-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,406 INFO [StoreOpener-ba6e592748955d732d7843b9603163dc-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,406 INFO [PostOpenDeployTasks:6ca2c5a98917cab87c982b4bbb7e0115] catalog.MetaEditor(432): Updated row test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,406 INFO [PostOpenDeployTasks:6ca2c5a98917cab87c982b4bbb7e0115] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 
2013-07-16 17:14:14,411 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 6ca2c5a98917cab87c982b4bbb7e0115 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,415 INFO [StoreOpener-f8146b196ac3399ee0b4bd5a227bd634-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,416 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 7050f74c0058e5a7a912d72a5fd1f4fa from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,417 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 7050f74c0058e5a7a912d72a5fd1f4fa, NAME => 'test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa.', STARTKEY => 'fff', ENDKEY => 'ggg'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,417 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,417 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(629): Onlined ba6e592748955d732d7843b9603163dc; next sequenceid=1 2013-07-16 17:14:14,418 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:14,418 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined b9cbc55dd9bcb588274e2598633563b2; next sequenceid=1 2013-07-16 17:14:14,418 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node b9cbc55dd9bcb588274e2598633563b2 2013-07-16 17:14:14,421 INFO [StoreOpener-f4cfa4d251af617b31eb11c76cc68678-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,422 INFO [PostOpenDeployTasks:ba6e592748955d732d7843b9603163dc] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 
2013-07-16 17:14:14,422 DEBUG [AM.ZK.Worker-pool-2-thread-2] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=b9cbc55dd9bcb588274e2598633563b2, current state from region state map ={b9cbc55dd9bcb588274e2598633563b2 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,422 INFO [AM.ZK.Worker-pool-2-thread-2] master.RegionStates(265): Transitioned from {b9cbc55dd9bcb588274e2598633563b2 state=PENDING_OPEN, ts=1373994853895, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {b9cbc55dd9bcb588274e2598633563b2 state=OPENING, ts=1373994854422, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,423 INFO [StoreOpener-f8146b196ac3399ee0b4bd5a227bd634-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,424 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 6ca2c5a98917cab87c982b4bbb7e0115 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,424 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 6ca2c5a98917cab87c982b4bbb7e0115, NAME => 'test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.', STARTKEY => 'yyy', ENDKEY => 'zzz'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,424 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(186): Opened test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,424 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 2fd443c241020be67cc0d08d473f5134 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,425 INFO [PostOpenDeployTasks:b9cbc55dd9bcb588274e2598633563b2] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 
2013-07-16 17:14:14,427 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(629): Onlined f4cfa4d251af617b31eb11c76cc68678; next sequenceid=1 2013-07-16 17:14:14,427 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:14,427 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:14,429 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined f8146b196ac3399ee0b4bd5a227bd634; next sequenceid=1 2013-07-16 17:14:14,429 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node f8146b196ac3399ee0b4bd5a227bd634 2013-07-16 17:14:14,429 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d88c6958af6ef781dd9834d0369f4f70 2013-07-16 17:14:14,430 DEBUG [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=ba6e592748955d732d7843b9603163dc, current state from region state map ={ba6e592748955d732d7843b9603163dc state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,431 INFO [AM.ZK.Worker-pool-2-thread-11] master.RegionStates(265): Transitioned from {ba6e592748955d732d7843b9603163dc state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {ba6e592748955d732d7843b9603163dc state=OPENING, ts=1373994854430, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,431 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/f8146b196ac3399ee0b4bd5a227bd634 2013-07-16 17:14:14,435 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/7dde26b51ab247338eaa8d5e372498e9 2013-07-16 17:14:14,436 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,436 DEBUG [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1218): The znode of test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. has been deleted, region state: {7dde26b51ab247338eaa8d5e372498e9 state=OPEN, ts=1373994854361, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,436 INFO [AM.ZK.Worker-pool-2-thread-7] master.RegionStates(301): Onlined 7dde26b51ab247338eaa8d5e372498e9 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,436 INFO [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1223): The master has opened test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,437 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 7dde26b51ab247338eaa8d5e372498e9 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,438 INFO [PostOpenDeployTasks:b9cbc55dd9bcb588274e2598633563b2] catalog.MetaEditor(432): Updated row test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,438 INFO [PostOpenDeployTasks:b9cbc55dd9bcb588274e2598633563b2] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 2013-07-16 17:14:14,438 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node b9cbc55dd9bcb588274e2598633563b2 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,440 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/7050f74c0058e5a7a912d72a5fd1f4fa 2013-07-16 17:14:14,440 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:14,440 DEBUG [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager$4(1218): The znode of test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. has been deleted, region state: {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPEN, ts=1373994854362, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,440 INFO [AM.ZK.Worker-pool-2-thread-12] master.RegionStates(301): Onlined 4ac8676e6af9c1c25f2f2a90ed99d3ae on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,440 INFO [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager$4(1223): The master has opened test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,441 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 4ac8676e6af9c1c25f2f2a90ed99d3ae in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,447 INFO [PostOpenDeployTasks:f4cfa4d251af617b31eb11c76cc68678] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:14,447 INFO [PostOpenDeployTasks:f8146b196ac3399ee0b4bd5a227bd634] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:14:14,447 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:14,448 INFO [PostOpenDeployTasks:ba6e592748955d732d7843b9603163dc] catalog.MetaEditor(432): Updated row test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 
with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,448 INFO [PostOpenDeployTasks:ba6e592748955d732d7843b9603163dc] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:14,448 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node ba6e592748955d732d7843b9603163dc from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,449 DEBUG [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=f4cfa4d251af617b31eb11c76cc68678, current state from region state map ={f4cfa4d251af617b31eb11c76cc68678 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,449 INFO [AM.ZK.Worker-pool-2-thread-5] master.RegionStates(265): Transitioned from {f4cfa4d251af617b31eb11c76cc68678 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {f4cfa4d251af617b31eb11c76cc68678 state=OPENING, ts=1373994854449, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,450 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 2fd443c241020be67cc0d08d473f5134 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,450 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(4192): Open {ENCODED => 2fd443c241020be67cc0d08d473f5134, NAME => 'test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.', STARTKEY => 'hhh', ENDKEY => 'iii'} 2013-07-16 17:14:14,451 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:14,451 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(534): Instantiated test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:14,454 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node b9cbc55dd9bcb588274e2598633563b2 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,454 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => b9cbc55dd9bcb588274e2598633563b2, NAME => 'test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.', STARTKEY => 'eee', ENDKEY => 'fff'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,455 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,457 DEBUG [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=d88c6958af6ef781dd9834d0369f4f70, current state from region state map ={d88c6958af6ef781dd9834d0369f4f70 state=OPENING, ts=1373994854332, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,457 INFO [AM.ZK.Worker-pool-2-thread-3] master.RegionStates(265): Transitioned from {d88c6958af6ef781dd9834d0369f4f70 state=OPENING, ts=1373994854332, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {d88c6958af6ef781dd9834d0369f4f70 state=OPEN, ts=1373994854457, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,457 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for d88c6958af6ef781dd9834d0369f4f70 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,458 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for d88c6958af6ef781dd9834d0369f4f70 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,461 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node ba6e592748955d732d7843b9603163dc from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,461 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => ba6e592748955d732d7843b9603163dc, NAME => 'test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.', STARTKEY => 'jjj', ENDKEY => 'kkk'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,461 DEBUG [AM.ZK.Worker-pool-2-thread-9] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=f8146b196ac3399ee0b4bd5a227bd634, current state from region state map ={f8146b196ac3399ee0b4bd5a227bd634 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,461 INFO [StoreOpener-2fd443c241020be67cc0d08d473f5134-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,462 INFO [AM.ZK.Worker-pool-2-thread-9] master.RegionStates(265): Transitioned from {f8146b196ac3399ee0b4bd5a227bd634 state=PENDING_OPEN, ts=1373994853896, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {f8146b196ac3399ee0b4bd5a227bd634 state=OPENING, ts=1373994854461, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,461 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(186): Opened test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,462 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 55d7e62280245f719c8f2cc61c586c64 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,463 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:14,463 DEBUG [AM.ZK.Worker-pool-2-thread-6] master.AssignmentManager$4(1218): The znode of test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. has been deleted, region state: {baee7b76d51e7196ee3121edc50bda59 state=OPEN, ts=1373994854387, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,463 INFO [AM.ZK.Worker-pool-2-thread-6] master.RegionStates(301): Onlined baee7b76d51e7196ee3121edc50bda59 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,463 INFO [AM.ZK.Worker-pool-2-thread-6] master.AssignmentManager$4(1223): The master has opened test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,464 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region baee7b76d51e7196ee3121edc50bda59 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,466 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/38600084dc094d719e5c6033fca5452b 2013-07-16 17:14:14,466 DEBUG [AM.ZK.Worker-pool-2-thread-18] master.AssignmentManager$4(1218): The znode of test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. has been deleted, region state: {38600084dc094d719e5c6033fca5452b state=OPEN, ts=1373994854390, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,466 INFO [AM.ZK.Worker-pool-2-thread-18] master.RegionStates(301): Onlined 38600084dc094d719e5c6033fca5452b on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,466 INFO [AM.ZK.Worker-pool-2-thread-18] master.AssignmentManager$4(1223): The master has opened test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,467 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 38600084dc094d719e5c6033fca5452b in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,467 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 55d7e62280245f719c8f2cc61c586c64 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:14,467 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(4192): Open {ENCODED => 55d7e62280245f719c8f2cc61c586c64, NAME => 'test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.', STARTKEY => 'iii', ENDKEY => 'jjj'} 2013-07-16 17:14:14,468 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:14,468 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(534): Instantiated test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:14,469 INFO [PostOpenDeployTasks:f4cfa4d251af617b31eb11c76cc68678] catalog.MetaEditor(432): Updated row test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,469 INFO [PostOpenDeployTasks:f4cfa4d251af617b31eb11c76cc68678] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:14,470 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node f4cfa4d251af617b31eb11c76cc68678 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,470 INFO [StoreOpener-2fd443c241020be67cc0d08d473f5134-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,471 INFO [PostOpenDeployTasks:f8146b196ac3399ee0b4bd5a227bd634] catalog.MetaEditor(432): Updated row test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,471 INFO [PostOpenDeployTasks:f8146b196ac3399ee0b4bd5a227bd634] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 
2013-07-16 17:14:14,471 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node f8146b196ac3399ee0b4bd5a227bd634 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,473 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:14,475 DEBUG [AM.ZK.Worker-pool-2-thread-4] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=7050f74c0058e5a7a912d72a5fd1f4fa, current state from region state map ={7050f74c0058e5a7a912d72a5fd1f4fa state=OPENING, ts=1373994854372, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,475 INFO [AM.ZK.Worker-pool-2-thread-4] master.RegionStates(265): Transitioned from {7050f74c0058e5a7a912d72a5fd1f4fa state=OPENING, ts=1373994854372, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {7050f74c0058e5a7a912d72a5fd1f4fa state=OPEN, ts=1373994854475, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,475 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for 7050f74c0058e5a7a912d72a5fd1f4fa from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,475 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 7050f74c0058e5a7a912d72a5fd1f4fa that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,476 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/b9cbc55dd9bcb588274e2598633563b2 2013-07-16 17:14:14,477 DEBUG [AM.ZK.Worker-pool-2-thread-10] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=6ca2c5a98917cab87c982b4bbb7e0115, current state from region state map ={6ca2c5a98917cab87c982b4bbb7e0115 state=OPENING, ts=1373994854357, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,477 INFO [AM.ZK.Worker-pool-2-thread-10] master.RegionStates(265): Transitioned from {6ca2c5a98917cab87c982b4bbb7e0115 state=OPENING, ts=1373994854357, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {6ca2c5a98917cab87c982b4bbb7e0115 state=OPEN, ts=1373994854477, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,477 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(629): Onlined 2fd443c241020be67cc0d08d473f5134; next sequenceid=1 2013-07-16 17:14:14,477 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(145): Handling OPENED event for 6ca2c5a98917cab87c982b4bbb7e0115 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,478 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:14,478 DEBUG 
[MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 6ca2c5a98917cab87c982b4bbb7e0115 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,480 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:14,481 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node f8146b196ac3399ee0b4bd5a227bd634 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,481 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => f8146b196ac3399ee0b4bd5a227bd634, NAME => 'test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634.', STARTKEY => 'uuu', ENDKEY => 'vvv'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,481 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,482 INFO [StoreOpener-55d7e62280245f719c8f2cc61c586c64-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,484 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:14,485 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node f4cfa4d251af617b31eb11c76cc68678 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,485 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => f4cfa4d251af617b31eb11c76cc68678, NAME => 'test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.', STARTKEY => 'ccc', ENDKEY => 'ddd'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,485 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-1] handler.OpenRegionHandler(186): Opened test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,485 DEBUG [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=2fd443c241020be67cc0d08d473f5134, current state from region state map ={2fd443c241020be67cc0d08d473f5134 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,486 INFO [AM.ZK.Worker-pool-2-thread-13] master.RegionStates(265): Transitioned from {2fd443c241020be67cc0d08d473f5134 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {2fd443c241020be67cc0d08d473f5134 state=OPENING, ts=1373994854485, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,488 DEBUG [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=b9cbc55dd9bcb588274e2598633563b2, current state from region state map ={b9cbc55dd9bcb588274e2598633563b2 state=OPENING, ts=1373994854422, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,488 INFO [AM.ZK.Worker-pool-2-thread-20] master.RegionStates(265): Transitioned from {b9cbc55dd9bcb588274e2598633563b2 state=OPENING, ts=1373994854422, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {b9cbc55dd9bcb588274e2598633563b2 state=OPEN, ts=1373994854488, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,488 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for b9cbc55dd9bcb588274e2598633563b2 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,489 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for b9cbc55dd9bcb588274e2598633563b2 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,489 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/f8146b196ac3399ee0b4bd5a227bd634 2013-07-16 17:14:14,490 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:14,491 INFO [StoreOpener-55d7e62280245f719c8f2cc61c586c64-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:14,492 DEBUG [AM.ZK.Worker-pool-2-thread-16] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=ba6e592748955d732d7843b9603163dc, current state from region state map ={ba6e592748955d732d7843b9603163dc state=OPENING, ts=1373994854430, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,492 INFO [AM.ZK.Worker-pool-2-thread-16] master.RegionStates(265): Transitioned from 
{ba6e592748955d732d7843b9603163dc state=OPENING, ts=1373994854430, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {ba6e592748955d732d7843b9603163dc state=OPEN, ts=1373994854492, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,492 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for ba6e592748955d732d7843b9603163dc from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,493 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for ba6e592748955d732d7843b9603163dc that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,492 INFO [PostOpenDeployTasks:2fd443c241020be67cc0d08d473f5134] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:14,495 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/d88c6958af6ef781dd9834d0369f4f70 2013-07-16 17:14:14,495 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager$4(1218): The znode of test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. has been deleted, region state: {d88c6958af6ef781dd9834d0369f4f70 state=OPEN, ts=1373994854457, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,495 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(301): Onlined d88c6958af6ef781dd9834d0369f4f70 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,495 INFO [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager$4(1223): The master has opened test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,496 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,497 INFO [RS_OPEN_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(629): Onlined 55d7e62280245f719c8f2cc61c586c64; next sequenceid=1 2013-07-16 17:14:14,497 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(598): regionserver:49955-0x13fe879789b0005 Attempting to retransition the opening state of node 55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:14,498 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region d88c6958af6ef781dd9834d0369f4f70 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,500 DEBUG [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=55d7e62280245f719c8f2cc61c586c64, current state from region state map ={55d7e62280245f719c8f2cc61c586c64 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,500 INFO [AM.ZK.Worker-pool-2-thread-1] master.RegionStates(265): Transitioned from {55d7e62280245f719c8f2cc61c586c64 state=PENDING_OPEN, ts=1373994853897, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {55d7e62280245f719c8f2cc61c586c64 state=OPENING, ts=1373994854500, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,500 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/7050f74c0058e5a7a912d72a5fd1f4fa 2013-07-16 17:14:14,501 DEBUG [AM.ZK.Worker-pool-2-thread-2] master.AssignmentManager$4(1218): The znode of test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. has been deleted, region state: {7050f74c0058e5a7a912d72a5fd1f4fa state=OPEN, ts=1373994854475, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,501 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 7050f74c0058e5a7a912d72a5fd1f4fa in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,501 INFO [AM.ZK.Worker-pool-2-thread-2] master.RegionStates(301): Onlined 7050f74c0058e5a7a912d72a5fd1f4fa on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,501 INFO [AM.ZK.Worker-pool-2-thread-2] master.AssignmentManager$4(1223): The master has opened test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,502 DEBUG [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=f8146b196ac3399ee0b4bd5a227bd634, current state from region state map ={f8146b196ac3399ee0b4bd5a227bd634 state=OPENING, ts=1373994854461, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,503 INFO [AM.ZK.Worker-pool-2-thread-17] master.RegionStates(265): Transitioned from {f8146b196ac3399ee0b4bd5a227bd634 state=OPENING, ts=1373994854461, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {f8146b196ac3399ee0b4bd5a227bd634 state=OPEN, ts=1373994854503, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,503 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for f8146b196ac3399ee0b4bd5a227bd634 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:14,503 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for f8146b196ac3399ee0b4bd5a227bd634 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,504 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=f4cfa4d251af617b31eb11c76cc68678, current state from region state map ={f4cfa4d251af617b31eb11c76cc68678 state=OPENING, ts=1373994854449, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,504 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(265): Transitioned from {f4cfa4d251af617b31eb11c76cc68678 state=OPENING, ts=1373994854449, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {f4cfa4d251af617b31eb11c76cc68678 state=OPEN, ts=1373994854504, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,504 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for f4cfa4d251af617b31eb11c76cc68678 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,504 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for f4cfa4d251af617b31eb11c76cc68678 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,504 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:14,505 DEBUG [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager$4(1218): The znode of test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. has been deleted, region state: {6ca2c5a98917cab87c982b4bbb7e0115 state=OPEN, ts=1373994854477, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,505 INFO [PostOpenDeployTasks:55d7e62280245f719c8f2cc61c586c64] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
2013-07-16 17:14:14,505 INFO [AM.ZK.Worker-pool-2-thread-11] master.RegionStates(301): Onlined 6ca2c5a98917cab87c982b4bbb7e0115 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,505 INFO [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager$4(1223): The master has opened test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,505 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 6ca2c5a98917cab87c982b4bbb7e0115 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,505 INFO [PostOpenDeployTasks:2fd443c241020be67cc0d08d473f5134] catalog.MetaEditor(432): Updated row test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,506 INFO [PostOpenDeployTasks:2fd443c241020be67cc0d08d473f5134] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:14,506 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 2fd443c241020be67cc0d08d473f5134 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,511 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/b9cbc55dd9bcb588274e2598633563b2 2013-07-16 17:14:14,511 DEBUG [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1218): The znode of test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. has been deleted, region state: {b9cbc55dd9bcb588274e2598633563b2 state=OPEN, ts=1373994854488, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,511 INFO [AM.ZK.Worker-pool-2-thread-7] master.RegionStates(301): Onlined b9cbc55dd9bcb588274e2598633563b2 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,511 INFO [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager$4(1223): The master has opened test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,511 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,512 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region b9cbc55dd9bcb588274e2598633563b2 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,512 INFO [PostOpenDeployTasks:55d7e62280245f719c8f2cc61c586c64] catalog.MetaEditor(432): Updated row test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. with server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,512 INFO [PostOpenDeployTasks:55d7e62280245f719c8f2cc61c586c64] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
2013-07-16 17:14:14,512 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 2fd443c241020be67cc0d08d473f5134 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,512 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(786): regionserver:49955-0x13fe879789b0005 Attempting to transition node 55d7e62280245f719c8f2cc61c586c64 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,512 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 2fd443c241020be67cc0d08d473f5134, NAME => 'test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.', STARTKEY => 'hhh', ENDKEY => 'iii'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,513 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-2] handler.OpenRegionHandler(186): Opened test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,516 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:14,516 DEBUG [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1218): The znode of test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. has been deleted, region state: {ba6e592748955d732d7843b9603163dc state=OPEN, ts=1373994854492, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,517 INFO [AM.ZK.Worker-pool-2-thread-5] master.RegionStates(301): Onlined ba6e592748955d732d7843b9603163dc on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,517 INFO [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1223): The master has opened test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,517 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region ba6e592748955d732d7843b9603163dc in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,517 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:14,518 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] zookeeper.ZKAssign(862): regionserver:49955-0x13fe879789b0005 Successfully transitioned node 55d7e62280245f719c8f2cc61c586c64 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:14,518 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 55d7e62280245f719c8f2cc61c586c64, NAME => 'test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.', STARTKEY => 'iii', ENDKEY => 'jjj'}, server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,519 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49955-0] handler.OpenRegionHandler(186): Opened test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,519 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/f8146b196ac3399ee0b4bd5a227bd634 2013-07-16 17:14:14,519 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,519 DEBUG [AM.ZK.Worker-pool-2-thread-9] master.AssignmentManager$4(1218): The znode of test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. has been deleted, region state: {f8146b196ac3399ee0b4bd5a227bd634 state=OPEN, ts=1373994854503, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:14,519 INFO [AM.ZK.Worker-pool-2-thread-9] master.RegionStates(301): Onlined f8146b196ac3399ee0b4bd5a227bd634 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,519 INFO [AM.ZK.Worker-pool-2-thread-9] master.AssignmentManager$4(1223): The master has opened test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:14,519 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region f8146b196ac3399ee0b4bd5a227bd634 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,520 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:14,520 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:14,520 DEBUG [AM.ZK.Worker-pool-2-thread-8] master.AssignmentManager$4(1218): The znode of test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. has been deleted, region state: {f4cfa4d251af617b31eb11c76cc68678 state=OPEN, ts=1373994854504, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,521 INFO [AM.ZK.Worker-pool-2-thread-8] master.RegionStates(301): Onlined f4cfa4d251af617b31eb11c76cc68678 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,521 INFO [AM.ZK.Worker-pool-2-thread-8] master.AssignmentManager$4(1223): The master has opened test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,521 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region f4cfa4d251af617b31eb11c76cc68678 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,521 DEBUG [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=2fd443c241020be67cc0d08d473f5134, current state from region state map ={2fd443c241020be67cc0d08d473f5134 state=OPENING, ts=1373994854485, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,522 INFO [AM.ZK.Worker-pool-2-thread-3] master.RegionStates(265): Transitioned from {2fd443c241020be67cc0d08d473f5134 state=OPENING, ts=1373994854485, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {2fd443c241020be67cc0d08d473f5134 state=OPEN, ts=1373994854522, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,522 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(145): Handling OPENED event for 2fd443c241020be67cc0d08d473f5134 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,522 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 2fd443c241020be67cc0d08d473f5134 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,522 DEBUG [AM.ZK.Worker-pool-2-thread-18] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, region=55d7e62280245f719c8f2cc61c586c64, current state from region state map ={55d7e62280245f719c8f2cc61c586c64 state=OPENING, ts=1373994854500, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,523 INFO [AM.ZK.Worker-pool-2-thread-18] master.RegionStates(265): Transitioned from {55d7e62280245f719c8f2cc61c586c64 state=OPENING, ts=1373994854500, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {55d7e62280245f719c8f2cc61c586c64 state=OPEN, ts=1373994854522, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,523 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for 55d7e62280245f719c8f2cc61c586c64 from ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; deleting unassigned node 2013-07-16 17:14:14,523 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 55d7e62280245f719c8f2cc61c586c64 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,526 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:14,527 DEBUG [AM.ZK.Worker-pool-2-thread-4] master.AssignmentManager$4(1218): The znode of test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 
has been deleted, region state: {2fd443c241020be67cc0d08d473f5134 state=OPEN, ts=1373994854522, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,527 INFO [AM.ZK.Worker-pool-2-thread-4] master.RegionStates(301): Onlined 2fd443c241020be67cc0d08d473f5134 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,527 INFO [AM.ZK.Worker-pool-2-thread-4] master.AssignmentManager$4(1223): The master has opened test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,527 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:14,528 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 2fd443c241020be67cc0d08d473f5134 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,528 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:14,529 DEBUG [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager$4(1218): The znode of test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. has been deleted, region state: {55d7e62280245f719c8f2cc61c586c64 state=OPEN, ts=1373994854522, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} 2013-07-16 17:14:14,529 INFO [AM.ZK.Worker-pool-2-thread-13] master.RegionStates(301): Onlined 55d7e62280245f719c8f2cc61c586c64 on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,529 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 55d7e62280245f719c8f2cc61c586c64 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:14,529 INFO [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager$4(1223): The master has opened test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:14,812 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,812 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,816 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,816 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:14,816 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 8 2013-07-16 17:14:14,817 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 8 2013-07-16 17:14:15,262 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:15,287 DEBUG [RpcServer.handler=3,port=50669] lock.ZKInterProcessLockBase(226): Acquired a lock for /2/table-lock/test/write-master:506690000000000 2013-07-16 17:14:15,293 DEBUG [RpcServer.handler=3,port=50669] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:15,298 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] handler.CreateTableHandler(146): Create table test 2013-07-16 17:14:15,305 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:15,313 INFO [IPC Server handler 4 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_8853704164920954173_1015{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:15,313 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_8853704164920954173_1015{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:15,319 INFO 
[RegionOpenAndInitThread-test-1] regionserver.HRegion(4031): creating HRegion test HTD == 'test', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '1', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'norep', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:56710/user/ec2-user/hbase/.tmp Table name == test 2013-07-16 17:14:15,331 INFO [IPC Server handler 4 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-6155520018248751279_1017{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:15,332 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-6155520018248751279_1017 size 28 2013-07-16 17:14:15,333 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(534): Instantiated test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:15,333 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(965): Closing test,,1373994855276.f3fce37071716f89a509124ef3fd1288.: disabling compactions & flushes 2013-07-16 17:14:15,334 DEBUG [RegionOpenAndInitThread-test-1] regionserver.HRegion(987): Updates disabled for region test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:15,334 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion(1045): Closed test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 
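The HTD dumped in the entry above amounts to a two-family 'test' table in which only 'f' is scoped for replication (REPLICATION_SCOPE => '1') while 'norep' stays local (REPLICATION_SCOPE => '0'), so only edits to 'f' are expected to reach the peer cluster. A minimal sketch of how such a descriptor can be built with the HTableDescriptor/HColumnDescriptor/HBaseAdmin client API of this line; the class name and the plain-main scaffolding are illustrative, not the test's own setup code:

// Illustrative sketch only: a 'test' table with a replicated family 'f' (scope 1)
// and a non-replicated family 'norep' (scope 0), matching the HTD dumped above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateReplicatedTestTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    HTableDescriptor htd = new HTableDescriptor("test");

    HColumnDescriptor f = new HColumnDescriptor("f");
    f.setScope(HConstants.REPLICATION_SCOPE_GLOBAL); // REPLICATION_SCOPE => '1'
    f.setMaxVersions(3);                             // VERSIONS => '3'
    htd.addFamily(f);

    HColumnDescriptor norep = new HColumnDescriptor("norep");
    norep.setScope(HConstants.REPLICATION_SCOPE_LOCAL); // REPLICATION_SCOPE => '0'
    norep.setMaxVersions(1);                            // VERSIONS => '1'
    htd.addFamily(norep);

    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      admin.createTable(htd);
    } finally {
      admin.close();
    }
  }
}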
2013-07-16 17:14:15,349 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] catalog.MetaEditor(254): Added 1 regions in META 2013-07-16 17:14:15,350 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] master.AssignmentManager(1503): Assigning 1 region(s) to ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,350 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] zookeeper.ZKAssign(177): master:50669-0x13fe879789b0011 Async create of unassigned node for f3fce37071716f89a509124ef3fd1288 with OFFLINE state 2013-07-16 17:14:15,352 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/region-in-transition 2013-07-16 17:14:15,352 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={f3fce37071716f89a509124ef3fd1288 state=OFFLINE, ts=1373994855350, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,354 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={f3fce37071716f89a509124ef3fd1288 state=OFFLINE, ts=1373994855350, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,355 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 unassigned znodes=1 of total=1 2013-07-16 17:14:15,355 INFO [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] master.RegionStates(265): Transitioned from {f3fce37071716f89a509124ef3fd1288 state=OFFLINE, ts=1373994855350, server=null} to {f3fce37071716f89a509124ef3fd1288 state=PENDING_OPEN, ts=1373994855355, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} 2013-07-16 17:14:15,356 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] master.ServerManager(735): New admin connection to ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,358 INFO [RpcServer.handler=1,port=39939] regionserver.HRegionServer(3455): Open test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:15,365 ERROR [IPC Server handler 0 on 49060] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.3 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.3 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:15,365 WARN [RpcServer.handler=1,port=39939] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:51438 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.3 is not allowed to call getBlockLocalPathInfo
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.3 is not allowed to call getBlockLocalPathInfo
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794)
at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at java.io.DataInputStream.readFully(DataInputStream.java:152)
at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorModtime(FSTableDescriptors.java:429)
at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorModtime(FSTableDescriptors.java:414)
at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:169)
at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:132)
at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:3458)
at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14390)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.3 is not allowed to call getBlockLocalPathInfo
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
at org.apache.hadoop.ipc.Client.call(Client.java:1235)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199)
at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254)
at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167)
at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ...
14 more 2013-07-16 17:14:15,371 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] zookeeper.ZKAssign(786): regionserver:39939-0x13fe879789b0013 Attempting to transition node f3fce37071716f89a509124ef3fd1288 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:15,371 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] master.AssignmentManager(1661): Bulk assigning done for ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,377 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] zookeeper.ZKAssign(862): regionserver:39939-0x13fe879789b0013 Successfully transitioned node f3fce37071716f89a509124ef3fd1288 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:15,377 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/region-in-transition/f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:15,377 INFO [RS_OPEN_REGION-ip-10-197-55-49:39939-0] regionserver.HRegion(4192): Open {ENCODED => f3fce37071716f89a509124ef3fd1288, NAME => 'test,,1373994855276.f3fce37071716f89a509124ef3fd1288.', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:15,378 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:15,378 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] regionserver.HRegion(534): Instantiated test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:15,379 DEBUG [AM.ZK.Worker-pool-13-thread-5] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, region=f3fce37071716f89a509124ef3fd1288, current state from region state map ={f3fce37071716f89a509124ef3fd1288 state=PENDING_OPEN, ts=1373994855355, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} 2013-07-16 17:14:15,379 INFO [AM.ZK.Worker-pool-13-thread-5] master.RegionStates(265): Transitioned from {f3fce37071716f89a509124ef3fd1288 state=PENDING_OPEN, ts=1373994855355, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} to {f3fce37071716f89a509124ef3fd1288 state=OPENING, ts=1373994855379, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} 2013-07-16 17:14:15,380 DEBUG [MASTER_TABLE_OPERATIONS-ip-10-197-55-49:50669-0] lock.ZKInterProcessLockBase(328): Released /2/table-lock/test/write-master:506690000000000 2013-07-16 17:14:15,385 INFO [StoreOpener-f3fce37071716f89a509124ef3fd1288-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:15,390 INFO [StoreOpener-f3fce37071716f89a509124ef3fd1288-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:15,394 INFO [RS_OPEN_REGION-ip-10-197-55-49:39939-0] regionserver.HRegion(629): Onlined f3fce37071716f89a509124ef3fd1288; next sequenceid=1 2013-07-16 17:14:15,394 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] zookeeper.ZKAssign(598): regionserver:39939-0x13fe879789b0013 Attempting to retransition the opening state of node f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:15,396 
INFO [PostOpenDeployTasks:f3fce37071716f89a509124ef3fd1288] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:15,404 INFO [PostOpenDeployTasks:f3fce37071716f89a509124ef3fd1288] catalog.MetaEditor(432): Updated row test,,1373994855276.f3fce37071716f89a509124ef3fd1288. with server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,405 INFO [PostOpenDeployTasks:f3fce37071716f89a509124ef3fd1288] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:15,405 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] zookeeper.ZKAssign(786): regionserver:39939-0x13fe879789b0013 Attempting to transition node f3fce37071716f89a509124ef3fd1288 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:15,409 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/region-in-transition/f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:15,410 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] zookeeper.ZKAssign(862): regionserver:39939-0x13fe879789b0013 Successfully transitioned node f3fce37071716f89a509124ef3fd1288 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:15,410 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => f3fce37071716f89a509124ef3fd1288, NAME => 'test,,1373994855276.f3fce37071716f89a509124ef3fd1288.', STARTKEY => '', ENDKEY => ''}, server: ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,410 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:39939-0] handler.OpenRegionHandler(186): Opened test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 
on server:ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,411 DEBUG [AM.ZK.Worker-pool-13-thread-6] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, region=f3fce37071716f89a509124ef3fd1288, current state from region state map ={f3fce37071716f89a509124ef3fd1288 state=OPENING, ts=1373994855379, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} 2013-07-16 17:14:15,412 INFO [AM.ZK.Worker-pool-13-thread-6] master.RegionStates(265): Transitioned from {f3fce37071716f89a509124ef3fd1288 state=OPENING, ts=1373994855379, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} to {f3fce37071716f89a509124ef3fd1288 state=OPEN, ts=1373994855412, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} 2013-07-16 17:14:15,412 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50669-1] handler.OpenedRegionHandler(145): Handling OPENED event for f3fce37071716f89a509124ef3fd1288 from ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314; deleting unassigned node 2013-07-16 17:14:15,412 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50669-1] zookeeper.ZKAssign(405): master:50669-0x13fe879789b0011 Deleting existing unassigned node for f3fce37071716f89a509124ef3fd1288 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:15,415 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:15,416 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/region-in-transition/f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:15,416 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/region-in-transition 2013-07-16 17:14:15,416 DEBUG [AM.ZK.Worker-pool-13-thread-7] master.AssignmentManager$4(1218): The znode of test,,1373994855276.f3fce37071716f89a509124ef3fd1288. has been deleted, region state: {f3fce37071716f89a509124ef3fd1288 state=OPEN, ts=1373994855412, server=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314} 2013-07-16 17:14:15,416 INFO [AM.ZK.Worker-pool-13-thread-7] master.RegionStates(301): Onlined f3fce37071716f89a509124ef3fd1288 on ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,417 INFO [AM.ZK.Worker-pool-13-thread-7] master.AssignmentManager$4(1223): The master has opened test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:15,417 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50669-1] zookeeper.ZKAssign(434): master:50669-0x13fe879789b0011 Successfully deleted unassigned node for region f3fce37071716f89a509124ef3fd1288 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:15,453 DEBUG [pool-1-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:15,509 INFO [pool-1-thread-1] hbase.ResourceChecker(147): before: replication.TestReplicationQueueFailoverCompressed#queueFailover Thread=527, OpenFileDescriptor=769, MaxFileDescriptor=65536, SystemLoadAverage=165, ProcessCount=82, AvailableMemoryMB=7679, ConnectionCount=12 2013-07-16 17:14:15,509 WARN [pool-1-thread-1] hbase.ResourceChecker(134): Thread=527 is superior to 500 2013-07-16 17:14:15,511 INFO [Thread-595] replication.TestReplicationQueueFailover(61): Start loading table 2013-07-16 17:14:15,618 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:15,619 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:15,622 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:15,622 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 9 2013-07-16 17:14:15,641 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:15,674 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4a9a4ba3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:15,677 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4a9a4ba3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:15,678 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4a9a4ba3-0x13fe879789b001e connected 2013-07-16 17:14:15,697 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:15,697 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b001e 2013-07-16 17:14:15,756 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:15,762 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:15,844 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:15,851 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:15,915 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:15,923 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:15,982 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:15,989 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,053 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,060 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,124 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,142 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,227 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,235 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,403 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,408 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,409 DEBUG [RS:1;ip-10-197-55-49:49955.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:16,421 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7055080296068517495_1069{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:16,422 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7055080296068517495_1069{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:16,426 INFO [RS:1;ip-10-197-55-49:49955.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 with entries=209, filesize=19.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 2013-07-16 17:14:16,459 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,465 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20400, fileLength: 20408, trailerPresent: true 2013-07-16 17:14:16,525 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,532 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 
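The recurring "Can't continue with getBlockLocalPathInfo() authorization" errors in this log (at 17:14:15,365 above and again at 17:14:17,531 below) come from the legacy HDFS short-circuit read path: the DataNode only answers getBlockLocalPathInfo for users whitelisted in dfs.block.local-path-access.user, the mini-cluster's per-process users (ec2-user.hfs.0, ec2-user.hfs.3) are not on that list, and so the client marks the local datanode dead for that read and carries on against the other replica, exactly as the accompanying DFSInputStream warning says. A hedged sketch of the two settings involved, written as plain Configuration overrides; the whitelisted user name is illustrative:

// Hedged sketch: the two settings behind legacy HDFS short-circuit reads.
// Property names are the stock Hadoop ones; the user value below is illustrative.
import org.apache.hadoop.conf.Configuration;

public class ShortCircuitReadSettings {
  public static Configuration apply(Configuration conf) {
    // Client side: request local (short-circuit) block reads.
    conf.setBoolean("dfs.client.read.shortcircuit", true);
    // DataNode side: only these users may call getBlockLocalPathInfo;
    // anyone else gets the AccessControlException seen in this log.
    conf.set("dfs.block.local-path-access.user", "ec2-user");
    return conf;
  }
}

Since the read continues on the remaining replica, these warnings are noisy but non-fatal in this run.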
2013-07-16 17:14:16,565 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,577 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,581 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,586 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20400, fileLength: 20408, trailerPresent: true 2013-07-16 17:14:16,596 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,627 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,635 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,642 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,658 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,664 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,679 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,687 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,692 DEBUG 
[RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 1 2013-07-16 17:14:16,703 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,713 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,742 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,750 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,767 DEBUG [RS:1;ip-10-197-55-49:39939.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:16,777 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,778 INFO [IPC Server handler 4 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-8077005520064081599_1019{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:16,780 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-8077005520064081599_1019{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:16,786 INFO [RS:1;ip-10-197-55-49:39939.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 with entries=17, filesize=20.1 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 2013-07-16 17:14:16,787 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,795 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will 
be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,804 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,808 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 2 2013-07-16 17:14:16,853 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,858 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,890 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,897 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 51, total replicated edits: 320 2013-07-16 17:14:16,929 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,938 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:16,964 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:16,972 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,000 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,007 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 
17:14:17,011 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,030 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,035 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 3 2013-07-16 17:14:17,079 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,088 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,129 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,139 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,167 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,179 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,339 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,346 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,349 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 4 2013-07-16 17:14:17,353 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so 
reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,360 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,387 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,398 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,426 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,434 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,457 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,463 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,490 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,499 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,500 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:17,510 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 209 synced till here 208 2013-07-16 17:14:17,518 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-7924882966250519452_1072{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:17,520 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-7924882966250519452_1072{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:17,528 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,531 ERROR [IPC Server handler 1 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:17,532 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794)
at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
at java.io.DataInputStream.read(DataInputStream.java:83)
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122)
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:490)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:299)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
at org.apache.hadoop.ipc.Client.call(Client.java:1235)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199)
at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254)
at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167)
at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
... 11 more
2013-07-16 17:14:17,535 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20377, fileLength: 20385, trailerPresent: true
2013-07-16 17:14:17,537 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 with entries=209, filesize=19.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501
2013-07-16 17:14:17,571 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal
2013-07-16 17:14:17,572 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20377, fileLength: 20385, trailerPresent: true
2013-07-16 17:14:17,599 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal
2013-07-16 17:14:17,606 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer:
walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,628 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,633 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,645 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,652 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,654 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 1 2013-07-16 17:14:17,751 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,755 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,756 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,761 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,763 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 2 2013-07-16 17:14:17,796 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,800 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,819 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,823 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,840 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,844 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,855 DEBUG [RS:1;ip-10-197-55-49:39939.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:17,864 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,868 INFO [IPC Server handler 2 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-5192519907720022186_1021{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:17,869 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-5192519907720022186_1021 size 20465 2013-07-16 17:14:17,872 INFO [RS:1;ip-10-197-55-49:39939.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 with entries=21, filesize=20.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 2013-07-16 17:14:17,873 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,897 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,903 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,923 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,928 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,950 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,958 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,965 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,978 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:17,980 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 3 2013-07-16 17:14:17,981 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:17,986 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,005 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,018 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,042 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,048 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: 
false 2013-07-16 17:14:18,065 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,071 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,083 DEBUG [RS:1;ip-10-197-55-49:49955.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:18,093 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,103 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7871647707537163371_1071{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:18,104 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7871647707537163371_1071{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:18,107 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20760, fileLength: 20768, trailerPresent: true 2013-07-16 17:14:18,108 INFO [RS:1;ip-10-197-55-49:49955.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 with entries=212, filesize=20.3 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 2013-07-16 17:14:18,126 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,129 ERROR [IPC Server handler 0 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.1 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:18,130 WARN [RS:1;ip-10-197-55-49:49955.replicationSource,2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) at org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:490) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:299) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
11 more 2013-07-16 17:14:18,133 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20760, fileLength: 20768, trailerPresent: true 2013-07-16 17:14:18,156 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,161 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,183 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,191 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,213 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,225 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,251 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,257 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,283 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,286 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,292 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,304 
DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,304 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 1 2013-07-16 17:14:18,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 235, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 at position: N/A 2013-07-16 17:14:18,313 INFO [ip-10-197-55-49:49955Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 470, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 at position: N/A 2013-07-16 17:14:18,329 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,334 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,357 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,362 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,384 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,388 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,406 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,418 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,419 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,432 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,436 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 2 2013-07-16 17:14:18,465 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,476 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,506 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,513 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:18,556 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:18,566 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,438 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,455 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,458 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 3 2013-07-16 17:14:19,461 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,473 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,491 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,497 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,505 DEBUG [RS:1;ip-10-197-55-49:39939.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:19,512 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,518 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-8186546302819868262_1023{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:19,519 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-8186546302819868262_1023 size 20361 2013-07-16 17:14:19,521 INFO [RS:1;ip-10-197-55-49:39939.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 with entries=23, filesize=19.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 2013-07-16 17:14:19,522 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,568 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,580 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,600 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:19,605 DEBUG 
[RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,609 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,613 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 419 synced till here 418 2013-07-16 17:14:19,617 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8364054660284748210_1074{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:19,619 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-8364054660284748210_1074{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:19,622 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 with entries=210, filesize=20.2 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 2013-07-16 17:14:19,627 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,629 ERROR [IPC Server handler 1 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:19,630 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) at org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:490) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:299) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 11 more 2013-07-16 17:14:19,635 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20633, fileLength: 20641, trailerPresent: true 2013-07-16 17:14:19,668 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,669 ERROR [IPC Server handler 2 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:19,671 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) at org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:490) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:299) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
11 more 2013-07-16 17:14:19,673 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20633, fileLength: 20641, trailerPresent: true 2013-07-16 17:14:19,693 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,698 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,713 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,720 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,733 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,739 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,752 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,756 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,760 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,766 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,768 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 4 2013-07-16 17:14:19,771 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,774 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,784 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,787 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,790 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 1 2013-07-16 17:14:19,892 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,896 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,919 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,923 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,938 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,942 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,957 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,962 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: 
false 2013-07-16 17:14:19,973 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,977 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:19,989 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:19,993 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,006 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,010 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,023 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,027 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,040 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,059 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,075 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,079 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,098 DEBUG 
[RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,103 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:20,103 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,110 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 627 synced till here 626 2013-07-16 17:14:20,117 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-4977846702945645744_1078{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 20376 2013-07-16 17:14:20,118 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-4977846702945645744_1078 size 20376 2013-07-16 17:14:20,127 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,129 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20368, fileLength: 20376, trailerPresent: true 2013-07-16 17:14:20,142 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,144 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20368, fileLength: 20376, trailerPresent: true 2013-07-16 17:14:20,149 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 1 2013-07-16 17:14:20,170 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,174 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,176 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 5 2013-07-16 17:14:20,251 DEBUG 
[RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,253 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20368, fileLength: 20376, trailerPresent: true 2013-07-16 17:14:20,258 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 2 2013-07-16 17:14:20,461 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,463 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20368, fileLength: 20376, trailerPresent: true 2013-07-16 17:14:20,468 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 3 2013-07-16 17:14:20,522 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 with entries=208, filesize=19.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 2013-07-16 17:14:20,678 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,682 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,684 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 6 2013-07-16 17:14:20,771 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,773 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20368, fileLength: 20376, trailerPresent: true 2013-07-16 17:14:20,793 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,799 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,830 DEBUG [RS:1;ip-10-197-55-49:39939.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:20,843 INFO [IPC Server handler 2 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-8552101543656509697_1025{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:20,844 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-8552101543656509697_1025 size 27533 2013-07-16 17:14:20,845 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,846 INFO [RS:1;ip-10-197-55-49:39939.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 with entries=22, filesize=26.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994860830 2013-07-16 17:14:20,849 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:20,889 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:20,894 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,009 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,015 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,034 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,059 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,076 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,088 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,096 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:21,104 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 835 synced till here 834 2013-07-16 17:14:21,107 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,108 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_1355218682590825264_1080{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:21,109 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_1355218682590825264_1080{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:21,115 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 with entries=208, filesize=20.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 2013-07-16 17:14:21,115 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20460, fileLength: 20468, trailerPresent: true 2013-07-16 17:14:21,128 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 
17:14:21,129 ERROR [IPC Server handler 9 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
2013-07-16 17:14:21,130 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue.
org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794)
    at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
    at java.io.DataInputStream.read(DataInputStream.java:83)
    at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122)
    at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
    at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:490)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:299)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
    at org.apache.hadoop.ipc.Client.call(Client.java:1235)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199)
    at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254)
    at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167)
    at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
    ... 11 more
2013-07-16 17:14:21,135 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20460, fileLength: 20468, trailerPresent: true
2013-07-16 17:14:21,160 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal
2013-07-16 17:14:21,166 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false
2013-07-16 17:14:21,178 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal
2013-07-16 17:14:21,191 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false
2013-07-16 17:14:21,202 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal
2013-07-16 17:14:21,220 DEBUG
[RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,233 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,249 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,270 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,282 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,286 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,297 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,299 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,299 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 7 2013-07-16 17:14:21,305 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,330 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,344 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,359 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog 
file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,364 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,376 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,383 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,393 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,399 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,411 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,423 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,428 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 1 2013-07-16 17:14:21,530 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,547 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,550 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 2 2013-07-16 17:14:21,684 DEBUG [RS:1;ip-10-197-55-49:49955.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:21,696 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8549119146580241963_1076{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:21,697 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: 
blockMap updated: 127.0.0.1:39876 is added to blk_-8549119146580241963_1076{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:21,701 INFO [RS:1;ip-10-197-55-49:49955.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 with entries=208, filesize=20.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 2013-07-16 17:14:21,752 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:21,765 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:21,767 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 3 2013-07-16 17:14:21,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 515, total replicated edits: 1410 2013-07-16 17:14:22,002 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,003 ERROR [IPC Server handler 5 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.1 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:22,006 WARN [RS:1;ip-10-197-55-49:49955.replicationSource,2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794)
    at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
    at java.io.DataInputStream.read(DataInputStream.java:83)
    at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122)
    at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
    at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:490)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:299)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.1 is not allowed to call getBlockLocalPathInfo
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
    at org.apache.hadoop.ipc.Client.call(Client.java:1235)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199)
    at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254)
    at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167)
    at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ...
11 more 2013-07-16 17:14:22,009 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20437, fileLength: 20445, trailerPresent: true 2013-07-16 17:14:22,069 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,076 DEBUG [RS:1;ip-10-197-55-49:39939.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:22,080 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:22,082 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 4 2013-07-16 17:14:22,090 INFO [IPC Server handler 2 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-3102627685678986704_1027{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 24466 2013-07-16 17:14:22,091 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-3102627685678986704_1027 size 24466 2013-07-16 17:14:22,094 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,096 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20437, fileLength: 20445, trailerPresent: true 2013-07-16 17:14:22,126 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,136 DEBUG [RS:1;ip-10-197-55-49:49955.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:22,144 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:22,153 DEBUG [RS:1;ip-10-197-55-49:49955.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 839 synced till here 838 2013-07-16 17:14:22,159 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6522423329968899971_1084{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:22,161 INFO [IPC Server handler 3 on 43175] 
blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6522423329968899971_1084 size 20595 2013-07-16 17:14:22,164 INFO [RS:1;ip-10-197-55-49:49955.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 with entries=210, filesize=20.1 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:22,485 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,502 INFO [RS:1;ip-10-197-55-49:39939.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994860830 with entries=17, filesize=23.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862076 2013-07-16 17:14:22,505 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:22,507 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 5 2013-07-16 17:14:22,520 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,523 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20587, fileLength: 20595, trailerPresent: true 2013-07-16 17:14:22,565 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,567 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20587, fileLength: 20595, trailerPresent: true 2013-07-16 17:14:22,595 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,602 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the 
trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:22,603 DEBUG [RS:1;ip-10-197-55-49:49955.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:22,636 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_2076358866708010836_1086{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 20958 2013-07-16 17:14:22,637 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_2076358866708010836_1086 size 20958 2013-07-16 17:14:22,658 DEBUG [RS:1;ip-10-197-55-49:39939.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:22,666 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,668 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20950, fileLength: 20958, trailerPresent: true 2013-07-16 17:14:22,676 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-5602611130235483651_1029{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:22,676 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-5602611130235483651_1029{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:22,681 INFO [RS:1;ip-10-197-55-49:39939.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862076 with entries=3, filesize=28.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 2013-07-16 17:14:22,695 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,697 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20950, fileLength: 20958, trailerPresent: true 2013-07-16 17:14:22,701 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 1 2013-07-16 17:14:22,803 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:22,806 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20950, fileLength: 20958, trailerPresent: true 2013-07-16 17:14:23,002 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 2 2013-07-16 17:14:23,009 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:23,014 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:23,016 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 6 2013-07-16 17:14:23,020 INFO [Thread-596-EventThread] hbase.HBaseTestingUtility$1(1887): Monitor ZKW received event=WatchedEvent state:SyncConnected type:None path:null 2013-07-16 17:14:23,048 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:23,048 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:23,048 INFO [pool-1-thread-1-EventThread] zookeeper.RegionServerTracker(94): RegionServer ephemeral node deleted, processing expiration [ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] 2013-07-16 17:14:23,049 INFO [Thread-596] hbase.HBaseTestingUtility(1903): ZK Closed Session 0x13fe879789b0005 2013-07-16 17:14:23,050 INFO [RS:1;ip-10-197-55-49:49955.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 with entries=213, filesize=20.5 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 2013-07-16 17:14:23,051 DEBUG [pool-1-thread-1-EventThread] master.AssignmentManager(3032): based on AM, current region=.META.,,1.1028785192 is on server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 server being checked: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:23,055 DEBUG [pool-1-thread-1-EventThread] master.ServerManager(510): Added=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 to dead servers, submitted shutdown handler to be executed meta=false 2013-07-16 17:14:23,058 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2013-07-16 17:14:23,060 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZKUtil(431): regionserver:49041-0x13fe879789b0006 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,060 INFO [RS:0;ip-10-197-55-49:49041-EventThread] regionserver.ReplicationSourceManager$OtherRegionServerWatcher(450): /1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 znode expired, trying to lock it 2013-07-16 17:14:23,062 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2013-07-16 17:14:23,063 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50904-0x13fe879789b0004 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,065 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZKUtil(431): regionserver:49041-0x13fe879789b0006 Set watcher on existing znode=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,081 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:23,085 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] handler.ServerShutdownHandler(185): Splitting logs for ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 before assignment. 2013-07-16 17:14:23,090 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.MasterFileSystem(338): Renamed region directory: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting 2013-07-16 17:14:23,091 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.SplitLogManager(1303): dead splitlog workers [ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] 2013-07-16 17:14:23,094 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.SplitLogManager(305): Scheduling batch of logs to split 2013-07-16 17:14:23,094 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.SplitLogManager(307): started splitting 6 logs in [hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting] 2013-07-16 17:14:23,103 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/splitlog 2013-07-16 17:14:23,104 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] regionserver.SplitLogWorker(583): tasks arrived or departed 2013-07-16 17:14:23,104 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,108 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,109 DEBUG [pool-1-thread-1-EventThread] 
master.SplitLogManager(686): put up splitlog task at znode /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,111 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/splitlog 2013-07-16 17:14:23,112 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] regionserver.SplitLogWorker(583): tasks arrived or departed 2013-07-16 17:14:23,115 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 acquired task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,119 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49955-0x13fe879789b0005 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:14:23,119 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(389): regionserver:49955-0x13fe879789b0005 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:14:23,123 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,124 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:23,126 INFO [Thread-596-EventThread] hbase.HBaseTestingUtility$1(1887): Monitor ZKW received event=WatchedEvent state:Disconnected type:None path:null 2013-07-16 17:14:23,129 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 ver = 0 2013-07-16 17:14:23,129 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:23,129 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 ver = 0 2013-07-16 17:14:23,129 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 ver = 0 2013-07-16 17:14:23,130 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, 
type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,131 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083, length=20445 2013-07-16 17:14:23,131 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:23,131 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 ver = 0 2013-07-16 17:14:23,133 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 ver = 0 2013-07-16 17:14:23,134 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 ver = 0 2013-07-16 17:14:23,134 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,135 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 acquired by ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,136 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,137 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 2013-07-16 17:14:23,145 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 after 8ms 
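The FSHDFSUtils messages just above show the split worker recovering the HDFS write lease on a WAL of the dead regionserver before opening it for splitting. A minimal sketch of that idea, built on the public DistributedFileSystem.recoverLease() call (the helper name and the fixed back-off are assumptions for illustration, not the FSHDFSUtils implementation):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class LeaseRecoverySketch {
        // Ask the NameNode to recover the write lease on a WAL before reading it,
        // polling until recoverLease() reports the file as closed.
        static void waitForLease(Configuration conf, Path wal) throws Exception {
            FileSystem fs = wal.getFileSystem(conf);
            if (!(fs instanceof DistributedFileSystem)) {
                return; // nothing to recover on a local filesystem
            }
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            int attempt = 0;
            while (!dfs.recoverLease(wal)) { // true once lease recovery is complete
                attempt++;
                Thread.sleep(1000);          // arbitrary back-off for this sketch
            }
            // Comparable to the "recoverLease=true, attempt=0 ... after 8ms" line above.
        }
    }

recoverLease() returns true as soon as the NameNode considers the file closed, which is why the log can report success on attempt=0 after only a few milliseconds.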
2013-07-16 17:14:23,147 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:23,148 ERROR [IPC Server handler 8 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:23,149 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 16 more 2013-07-16 17:14:23,153 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20437, fileLength: 20445, trailerPresent: true 2013-07-16 17:14:23,162 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:23,163 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:23,162 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:23,203 ERROR [IPC Server handler 1 on 43175] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.1 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:23,203 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(594): Finishing writing output logs and closing down. 
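The AccessControlException above comes from the legacy HDFS short-circuit read path: the DFS client asks the local DataNode for getBlockLocalPathInfo(), the DataNode's checkBlockLocalPathAccess() rejects the mini-cluster user because it is not whitelisted, and the client marks that node dead for the block and falls back to an ordinary socket read, so the trace is noisy but is not what fails the test. If local reads were actually wanted for this user, the relevant knobs would look roughly like the sketch below; the property names are recalled from Hadoop-2-era configuration and should be treated as assumptions.

    import org.apache.hadoop.conf.Configuration;

    public class ShortCircuitReadConfigSketch {
        // Hedged sketch: enable client-side short-circuit reads and whitelist the
        // reading user for the legacy getBlockLocalPathInfo() path.
        static Configuration withShortCircuitReads(Configuration base, String user) {
            Configuration conf = new Configuration(base);
            conf.setBoolean("dfs.client.read.shortcircuit", true);
            // DataNode-side whitelist consulted by checkBlockLocalPathAccess();
            // without the reading user here, getBlockLocalPathInfo() is rejected
            // exactly as in the log above (property name is an assumption).
            conf.set("dfs.block.local-path-access.user", user); // e.g. "ec2-user.hfs.0"
            return conf;
        }
    }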
2013-07-16 17:14:23,203 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:23,206 DEBUG [WriterThread-2] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000423.temp region=f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:23,208 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 3 2013-07-16 17:14:23,214 DEBUG [WriterThread-1] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000472.temp region=d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:23,221 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:23,250 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-8337894913240143259_1092{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:23,252 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8337894913240143259_1092{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:23,253 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-3061601612102510821_1091{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,254 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-3061601612102510821_1091{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,256 INFO [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000423.temp (wrote 49 edits in 43ms) 2013-07-16 17:14:23,258 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000472.temp (wrote 159 edits in 49ms) 2013-07-16 17:14:23,271 DEBUG [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000423.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000471 2013-07-16 17:14:23,272 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename 
hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000472.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000630 2013-07-16 17:14:23,273 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(601): Processed 208 edits across 2 regions; log file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 is corrupted = false progress failed = false 2013-07-16 17:14:23,276 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,276 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,277 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(462): successfully transitioned task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,277 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 done with task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 in 162ms 2013-07-16 17:14:23,277 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,280 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 acquired task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,283 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409, length=20768 2013-07-16 17:14:23,283 INFO 
[SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:23,286 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,287 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 2013-07-16 17:14:23,289 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 after 0ms 2013-07-16 17:14:23,290 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 to hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 2013-07-16 17:14:23,291 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:23,292 ERROR [IPC Server handler 9 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:23,293 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,294 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,296 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
16 more 2013-07-16 17:14:23,300 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,300 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994858083 2013-07-16 17:14:23,300 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 acquired by ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,301 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20760, fileLength: 20768, trailerPresent: true 2013-07-16 17:14:23,311 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:23,314 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A 2013-07-16 17:14:23,314 INFO [ip-10-197-55-49:49955Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 1052, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:23,316 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:23,317 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:23,349 DEBUG [WriterThread-1] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000211.temp region=64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:23,349 DEBUG [WriterThread-2] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000237.temp region=f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:23,369 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(594): Finishing writing output logs and closing down. 
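The recovered.edits entries above follow a write-to-temp-then-rename pattern: judging from the log, each .temp file is named after the first sequence id written (423, 472) and is renamed on close to the last sequence id it contains (471 = 423+49-1, 630 = 472+159-1), so a region reopening later can tell from the file name which edits it covers. A rough sketch of that publish step, assuming a hypothetical helper and the 19-digit zero padding seen in the paths:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RecoveredEditsPublishSketch {
        // Hedged sketch of the temp-then-rename step seen in the log: a writer
        // produces recovered.edits/<firstSeqId>.temp and, after a clean close,
        // the file is renamed to the last sequence id it contains.
        static Path publish(FileSystem fs, Path regionDir,
                            long firstSeqId, long lastSeqId) throws Exception {
            Path editsDir = new Path(regionDir, "recovered.edits");
            Path temp = new Path(editsDir, String.format("%019d.temp", firstSeqId));
            Path done = new Path(editsDir, String.format("%019d", lastSeqId));
            if (!fs.rename(temp, done)) {
                throw new RuntimeException("Rename " + temp + " to " + done + " failed");
            }
            return done;
        }
    }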
2013-07-16 17:14:23,369 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:23,371 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:23,392 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7656196182600639896_1095{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:23,393 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_293337223029028366_1096{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,394 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_293337223029028366_1096{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,395 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7656196182600639896_1095 size 2554 2013-07-16 17:14:23,395 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000211.temp (wrote 26 edits in 36ms) 2013-07-16 17:14:23,397 INFO [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000237.temp (wrote 186 edits in 31ms) 2013-07-16 17:14:23,400 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000211.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000236 2013-07-16 17:14:23,402 DEBUG [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000237.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000422 2013-07-16 17:14:23,402 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(601): Processed 212 edits across 2 regions; log file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 is corrupted = false progress failed = false 2013-07-16 17:14:23,404 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,404 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,405 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(462): successfully transitioned task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,405 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 done with task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 in 125ms 2013-07-16 17:14:23,406 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,408 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 acquired task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,410 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150, length=20408 2013-07-16 17:14:23,410 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:23,412 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,412 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 2013-07-16 17:14:23,415 INFO 
[SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 after 3ms 2013-07-16 17:14:23,417 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:23,417 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 to hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 2013-07-16 17:14:23,418 ERROR [IPC Server handler 0 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:23,418 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,419 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,419 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
16 more 2013-07-16 17:14:23,421 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,421 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994856409 2013-07-16 17:14:23,422 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 acquired by ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,422 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20400, fileLength: 20408, trailerPresent: true 2013-07-16 17:14:23,438 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 4 unassigned = 3 2013-07-16 17:14:23,446 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:23,453 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:23,464 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:23,475 DEBUG [RpcServer.handler=4,port=50904] master.ServerManager(336): Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server 2013-07-16 17:14:23,478 FATAL [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1752): ABORTING region server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) org.apache.hadoop.hbase.exceptions.YouAreDeadException: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at 
org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:232) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1001) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:839) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.YouAreDeadException): org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerReport(RegionServerStatusProtos.java:4095) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:999) ... 17 more 2013-07-16 17:14:23,479 FATAL [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1760): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2013-07-16 17:14:23,482 ERROR [RpcServer.handler=1,port=50904] master.HMaster(1283): Region server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 reported a fatal error: ABORTING region server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) Cause: org.apache.hadoop.hbase.exceptions.YouAreDeadException: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:232) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1001) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:839) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.YouAreDeadException): org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerReport(RegionServerStatusProtos.java:4095) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:999) ... 
17 more 2013-07-16 17:14:23,484 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1685): STOPPED: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:23,495 DEBUG [WriterThread-2] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000002.temp region=64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:23,497 INFO [RS:1;ip-10-197-55-49:49955] regionserver.SplitLogWorker(596): Sending interrupt to stop the worker thread 2013-07-16 17:14:23,499 INFO [Thread-160] regionserver.MemStoreFlusher$FlushHandler(267): Thread-160 exiting 2013-07-16 17:14:23,500 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.SplitLogWorker(281): SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException 2013-07-16 17:14:23,500 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.SplitLogWorker(205): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 exiting 2013-07-16 17:14:23,501 INFO [RS:1;ip-10-197-55-49:49955.periodicFlusher] hbase.Chore(93): RS:1;ip-10-197-55-49:49955.periodicFlusher exiting 2013-07-16 17:14:23,501 INFO [RS:1;ip-10-197-55-49:49955.leaseChecker] regionserver.Leases(124): RS:1;ip-10-197-55-49:49955.leaseChecker closing leases 2013-07-16 17:14:23,502 INFO [RS:1;ip-10-197-55-49:49955.leaseChecker] regionserver.Leases(131): RS:1;ip-10-197-55-49:49955.leaseChecker closed leases 2013-07-16 17:14:23,505 INFO [RS:1;ip-10-197-55-49:49955] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager abruptly. 2013-07-16 17:14:23,505 INFO [RS:1;ip-10-197-55-49:49955.compactionChecker] hbase.Chore(93): RS:1;ip-10-197-55-49:49955.compactionChecker exiting 2013-07-16 17:14:23,506 INFO [RS:1;ip-10-197-55-49:49955.logRoller] regionserver.LogRoller(119): LogRoller exiting. 2013-07-16 17:14:23,506 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(125): Processing close of test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:23,507 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(965): Closing test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.: disabling compactions & flushes 2013-07-16 17:14:23,507 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(987): Updates disabled for region test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 
2013-07-16 17:14:23,508 DEBUG [RS:1;ip-10-197-55-49:49955.replicationSource,2] regionserver.ReplicationSource(386): Source exiting 2 2013-07-16 17:14:23,510 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(594): Finishing writing output logs and closing down. 2013-07-16 17:14:23,510 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:23,514 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(125): Processing close of test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:23,514 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(965): Closing test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.: disabling compactions & flushes 2013-07-16 17:14:23,514 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(905): aborting server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:23,514 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(987): Updates disabled for region test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:23,514 DEBUG [RS:1;ip-10-197-55-49:49955] catalog.CatalogTracker(208): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@208c5a4f 2013-07-16 17:14:23,515 INFO [RS:1;ip-10-197-55-49:49955] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0009 2013-07-16 17:14:23,515 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(125): Processing close of test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:23,515 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(965): Closing test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.: disabling compactions & flushes 2013-07-16 17:14:23,515 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(987): Updates disabled for region test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:23,517 INFO [RS:1;ip-10-197-55-49:49955] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager abruptly. 2013-07-16 17:14:23,517 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1076): Waiting on 13 regions to close 2013-07-16 17:14:23,517 INFO [StoreCloserThread-test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,522 INFO [StoreCloserThread-test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,521 INFO [StoreCloserThread-test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,523 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:23,524 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(1045): Closed test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:23,524 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(177): Closed region test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:23,524 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(125): Processing close of test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
2013-07-16 17:14:23,524 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(965): Closing test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.: disabling compactions & flushes 2013-07-16 17:14:23,524 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(987): Updates disabled for region test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:23,523 INFO [StoreCloserThread-test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,527 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(1045): Closed test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:23,527 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(177): Closed region test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:23,527 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(125): Processing close of test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:23,528 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(965): Closing test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.: disabling compactions & flushes 2013-07-16 17:14:23,528 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(987): Updates disabled for region test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:23,529 INFO [StoreCloserThread-test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,529 INFO [StoreCloserThread-test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,529 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(1045): Closed test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:23,529 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(177): Closed region test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:23,529 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(125): Processing close of test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:23,529 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(965): Closing test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.: disabling compactions & flushes 2013-07-16 17:14:23,530 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(987): Updates disabled for region test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:23,530 INFO [StoreCloserThread-test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,530 INFO [StoreCloserThread-test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,532 INFO [StoreCloserThread-test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,532 INFO [StoreCloserThread-test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,537 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(1045): Closed test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:23,537 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(177): Closed region test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
2013-07-16 17:14:23,538 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(125): Processing close of test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:23,538 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(965): Closing test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.: disabling compactions & flushes 2013-07-16 17:14:23,538 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(987): Updates disabled for region test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:23,529 INFO [StoreCloserThread-test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,538 INFO [StoreCloserThread-test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,539 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(1045): Closed test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:23,540 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(177): Closed region test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:23,539 INFO [StoreCloserThread-test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,540 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(125): Processing close of test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:23,540 INFO [StoreCloserThread-test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,540 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(965): Closing test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.: disabling compactions & flushes 2013-07-16 17:14:23,540 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(987): Updates disabled for region test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:23,540 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(1045): Closed test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:23,540 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(177): Closed region test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:23,541 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(125): Processing close of test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:23,541 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(965): Closing test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.: disabling compactions & flushes 2013-07-16 17:14:23,541 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(987): Updates disabled for region test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:23,541 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(1045): Closed test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:23,542 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(177): Closed region test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:23,542 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(125): Processing close of test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
2013-07-16 17:14:23,542 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(965): Closing test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.: disabling compactions & flushes 2013-07-16 17:14:23,542 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(987): Updates disabled for region test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:23,543 INFO [StoreCloserThread-test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,543 INFO [StoreCloserThread-test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,543 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(1045): Closed test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:23,543 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(177): Closed region test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:23,543 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(125): Processing close of test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:23,544 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(965): Closing test,nnn,1373994853026.093d3ef494905701450f33a487333200.: disabling compactions & flushes 2013-07-16 17:14:23,544 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(987): Updates disabled for region test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:23,544 INFO [StoreCloserThread-test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,544 INFO [StoreCloserThread-test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,544 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5219464004133508689_1098{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:23,545 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(1045): Closed test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:23,545 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5219464004133508689_1098{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:23,545 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(177): Closed region test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:23,546 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(125): Processing close of test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 
2013-07-16 17:14:23,546 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(965): Closing test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.: disabling compactions & flushes 2013-07-16 17:14:23,546 INFO [StoreCloserThread-test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,546 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(987): Updates disabled for region test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:23,547 INFO [StoreCloserThread-test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,547 INFO [StoreCloserThread-test,nnn,1373994853026.093d3ef494905701450f33a487333200.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,547 INFO [StoreCloserThread-test,nnn,1373994853026.093d3ef494905701450f33a487333200.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,548 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(1045): Closed test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:23,548 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(177): Closed region test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:23,548 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(125): Processing close of test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:23,548 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(965): Closing test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.: disabling compactions & flushes 2013-07-16 17:14:23,548 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(987): Updates disabled for region test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:23,549 INFO [StoreCloserThread-test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,549 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000002.temp (wrote 209 edits in 40ms) 2013-07-16 17:14:23,549 INFO [StoreCloserThread-test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,549 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] regionserver.HRegion(1045): Closed test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:23,549 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-1] handler.CloseRegionHandler(177): Closed region test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:23,550 INFO [StoreCloserThread-test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:23,550 INFO [StoreCloserThread-test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:23,550 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] regionserver.HRegion(1045): Closed test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:23,550 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-2] handler.CloseRegionHandler(177): Closed region test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:23,548 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] regionserver.HRegion(1045): Closed test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
2013-07-16 17:14:23,553 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49955-0] handler.CloseRegionHandler(177): Closed region test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:23,554 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000002.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000210 2013-07-16 17:14:23,554 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(601): Processed 209 edits across 1 regions; log file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 is corrupted = false progress failed = false 2013-07-16 17:14:23,557 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,557 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,558 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(462): successfully transitioned task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,558 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 done with task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 in 150ms 2013-07-16 17:14:23,559 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,567 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 to hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 2013-07-16 17:14:23,569 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting 
/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,569 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,571 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 acquired task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,572 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684, length=20595 2013-07-16 17:14:23,572 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:23,573 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,573 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994848150 2013-07-16 17:14:23,574 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 acquired by ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,574 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/splitlog 2013-07-16 17:14:23,574 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] regionserver.SplitLogWorker(583): tasks arrived or departed 2013-07-16 17:14:23,576 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,577 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,578 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 2013-07-16 17:14:23,579 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 after 1ms 2013-07-16 17:14:23,581 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:23,582 ERROR [IPC Server handler 1 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:23,584 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 16 more 2013-07-16 17:14:23,588 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20587, fileLength: 20595, trailerPresent: true 2013-07-16 17:14:23,595 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:23,595 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:23,595 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:23,622 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(594): Finishing writing output logs and closing down. 
2013-07-16 17:14:23,623 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:23,626 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:23,627 DEBUG [WriterThread-0] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000706.temp region=2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:23,644 DEBUG [WriterThread-2] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000631.temp region=d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:23,651 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:23,652 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:23,654 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 7 2013-07-16 17:14:23,662 WARN [hbase-table-pool-62-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:23,662 WARN [hbase-table-pool-62-thread-2] 
ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:23,671 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3049710619349538948_1101{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,672 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3049710619349538948_1101{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,697 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-7590496273978834333_1102{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:23,698 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-7590496273978834333_1102{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:23,699 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000706.temp (wrote 134 edits in 23ms) 2013-07-16 17:14:23,701 INFO [split-log-closeStream-2] 
wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000631.temp (wrote 76 edits in 53ms) 2013-07-16 17:14:23,704 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000706.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000840 2013-07-16 17:14:23,717 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(935): stopping server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; all regions closed. 2013-07-16 17:14:23,718 INFO [RS:1;ip-10-197-55-49:49955.logSyncer] wal.FSHLog$LogSyncer(966): RS:1;ip-10-197-55-49:49955.logSyncer exiting 2013-07-16 17:14:23,718 DEBUG [RS:1;ip-10-197-55-49:49955] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:23,721 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3944365946368308261_1088{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,722 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3944365946368308261_1088{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:23,723 ERROR [IPC Server handler 6 on 43175] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.1 (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604: File does not exist. [Lease. Holder: DFSClient_hb_rs_ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790_-776990866_219, pendingcreates: 1] 2013-07-16 17:14:23,727 ERROR [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(1128): Close and delete failed org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604: File does not exist. [Lease. 
Holder: DFSClient_hb_rs_ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790_-776990866_219, pendingcreates: 1] at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2398) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2390) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2455) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2432) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:546) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:389) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40748) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:97) at org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:49) at org.apache.hadoop.hbase.regionserver.HRegionServer.closeWAL(HRegionServer.java:1128) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:941) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:23,741 DEBUG [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename 
hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000631.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000707 2013-07-16 17:14:23,741 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(601): Processed 210 edits across 2 regions; log file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 is corrupted = false progress failed = false 2013-07-16 17:14:23,744 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,744 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(462): successfully transitioned task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,744 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 done with task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 in 173ms 2013-07-16 17:14:23,747 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,748 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:23,749 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,751 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 acquired task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:23,754 INFO 
[SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604, length=0 2013-07-16 17:14:23,754 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:23,758 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:23,759 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(831): File hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 might be still open, length is 0 2013-07-16 17:14:23,759 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 2013-07-16 17:14:23,759 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 to hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 2013-07-16 17:14:23,761 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockInfoUnderConstruction(248): BLOCK* blk_3944365946368308261_1088{blockUCState=UNDER_RECOVERY, primaryNodeIndex=0, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} recovery started, primary=127.0.0.1:39876 2013-07-16 17:14:23,762 WARN [IPC Server handler 9 on 43175] namenode.FSNamesystem(3135): DIR* NameSystem.internalReleaseLease: File /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 has not been closed. Lease recovery is in progress. 
RecoveryId = 1103 for block blk_3944365946368308261_1088{blockUCState=UNDER_RECOVERY, primaryNodeIndex=0, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} 2013-07-16 17:14:23,763 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,763 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(156): recoverLease=false, attempt=0 on file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 after 4ms 2013-07-16 17:14:23,764 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 acquired by ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:23,764 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:23,765 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,765 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994861684 2013-07-16 17:14:23,828 INFO [RS:1;ip-10-197-55-49:49955] regionserver.Leases(124): RS:1;ip-10-197-55-49:49955 closing leases 2013-07-16 17:14:23,829 INFO [RS:1;ip-10-197-55-49:49955] regionserver.Leases(131): RS:1;ip-10-197-55-49:49955 closed leases 2013-07-16 17:14:23,829 INFO [RS:1;ip-10-197-55-49:49955] regionserver.CompactSplitThread(356): Waiting for Split Thread to finish... 2013-07-16 17:14:23,829 INFO [RS:1;ip-10-197-55-49:49955] regionserver.CompactSplitThread(356): Waiting for Merge Thread to finish... 2013-07-16 17:14:23,829 INFO [RS:1;ip-10-197-55-49:49955] regionserver.CompactSplitThread(356): Waiting for Large Compaction Thread to finish... 2013-07-16 17:14:23,829 INFO [RS:1;ip-10-197-55-49:49955] regionserver.CompactSplitThread(356): Waiting for Small Compaction Thread to finish... 
2013-07-16 17:14:23,830 INFO [RS:1;ip-10-197-55-49:49955] regionserver.ReplicationSource(756): Closing source 2 because: Region server is closing 2013-07-16 17:14:23,830 INFO [RS:1;ip-10-197-55-49:49955] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b000e 2013-07-16 17:14:24,194 DEBUG [hbase-table-pool-62-thread-2] client.AsyncProcess(603): Attempt #4/35 failed for 3 operations on server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, resubmitting 3, tableName=test, location=region=test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc., hostname=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, seqNum=1, last exception was: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:49955 - sleeping 500 ms. 2013-07-16 17:14:24,357 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:24,361 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:24,363 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 8 2013-07-16 17:14:24,436 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 2 unassigned = 1 2013-07-16 17:14:24,707 DEBUG [hbase-table-pool-62-thread-1] client.AsyncProcess(603): Attempt #5/35 failed for 3 operations on server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, resubmitting 3, tableName=test, location=region=test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc., hostname=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, seqNum=1, last exception was: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:49955 - sleeping 1002 ms. 2013-07-16 17:14:24,893 WARN [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:24,894 INFO [RS:1;ip-10-197-55-49:49955] util.RetryCounter(54): Sleeping 20ms before retry #1... 
2013-07-16 17:14:25,165 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:25,170 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:25,172 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 9 2013-07-16 17:14:25,436 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 2 unassigned = 1 2013-07-16 17:14:25,515 INFO [Thread-597-EventThread] hbase.HBaseTestingUtility$1(1887): Monitor ZKW received event=WatchedEvent state:SyncConnected type:None path:null 2013-07-16 17:14:25,523 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:25,523 INFO [pool-1-thread-1-EventThread] zookeeper.RegionServerTracker(94): RegionServer ephemeral node deleted, processing expiration [ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314] 2013-07-16 17:14:25,523 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:25,524 INFO [Thread-597] hbase.HBaseTestingUtility(1903): ZK Closed Session 0x13fe879789b0013 2013-07-16 17:14:25,526 DEBUG [pool-1-thread-1-EventThread] master.AssignmentManager(3032): based on AM, current region=.META.,,1.1028785192 is on server=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 server being checked: ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:25,527 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZKUtil(431): regionserver:55133-0x13fe879789b0012 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,527 INFO [RS:0;ip-10-197-55-49:55133-EventThread] regionserver.ReplicationSourceManager$OtherRegionServerWatcher(450): /2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 znode expired, trying to lock it 2013-07-16 17:14:25,530 DEBUG [pool-1-thread-1-EventThread] master.ServerManager(510): Added=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 to dead servers, submitted shutdown handler to be executed meta=false 2013-07-16 17:14:25,530 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2013-07-16 17:14:25,532 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(431): master:50669-0x13fe879789b0011 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,534 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2013-07-16 17:14:25,537 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZKUtil(431): regionserver:55133-0x13fe879789b0012 Set watcher on existing znode=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,550 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:25,550 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] handler.ServerShutdownHandler(185): Splitting logs for ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 before assignment. 2013-07-16 17:14:25,556 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] master.MasterFileSystem(338): Renamed region directory: hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting 2013-07-16 17:14:25,556 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] master.SplitLogManager(1303): dead splitlog workers [ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314] 2013-07-16 17:14:25,559 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] master.SplitLogManager(305): Scheduling batch of logs to split 2013-07-16 17:14:25,559 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] master.SplitLogManager(307): started splitting 7 logs in [hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting] 2013-07-16 17:14:25,561 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/splitlog 2013-07-16 17:14:25,561 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] regionserver.SplitLogWorker(583): tasks arrived or departed 2013-07-16 17:14:25,562 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,563 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,564 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,565 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,568 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/splitlog 2013-07-16 17:14:25,568 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] regionserver.SplitLogWorker(583): tasks arrived or departed 2013-07-16 17:14:25,571 
DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994860830 2013-07-16 17:14:25,572 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 ver = 0 2013-07-16 17:14:25,573 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862076 2013-07-16 17:14:25,573 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 acquired task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,574 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(686): put up splitlog task at znode /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:25,576 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 ver = 0 2013-07-16 17:14:25,576 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505, length=27533 2013-07-16 17:14:25,576 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:25,577 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 ver = 0 2013-07-16 17:14:25,577 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 ver = 0 2013-07-16 17:14:25,578 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,578 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): 
regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,579 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994860830 ver = 0 2013-07-16 17:14:25,580 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 2013-07-16 17:14:25,580 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862076 ver = 0 2013-07-16 17:14:25,581 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 ver = 0 2013-07-16 17:14:25,581 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 after 1ms 2013-07-16 17:14:25,582 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 acquired by ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,583 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:25,586 ERROR [IPC Server handler 1 on 49060] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.2 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:25,587 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:51438 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
16 more 2013-07-16 17:14:25,592 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 27525, fileLength: 27533, trailerPresent: true 2013-07-16 17:14:25,597 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:25,598 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:25,601 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:25,615 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:39939-0x13fe879789b0013 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:14:25,615 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(389): regionserver:39939-0x13fe879789b0013 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:14:25,616 DEBUG [WriterThread-2] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000063.temp region=f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:25,620 INFO [Thread-597-EventThread] hbase.HBaseTestingUtility$1(1887): Monitor ZKW received event=WatchedEvent state:Disconnected type:None path:null 2013-07-16 17:14:25,632 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(594): Finishing writing output logs and closing down. 2013-07-16 17:14:25,632 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:25,634 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:25,643 INFO [IPC Server handler 1 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_4065590505979657028_1033{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:25,644 INFO [IPC Server handler 3 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_4065590505979657028_1033{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:25,647 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000063.temp (wrote 22 edits in 16ms) 2013-07-16 17:14:25,651 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000063.temp to hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000084 2013-07-16 17:14:25,651 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(601): Processed 22 edits across 1 regions; 
log file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 is corrupted = false progress failed = false 2013-07-16 17:14:25,653 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,654 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,654 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(462): successfully transitioned task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,654 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 done with task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 in 81ms 2013-07-16 17:14:25,656 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,659 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 acquired task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,660 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768, length=20465 2013-07-16 17:14:25,661 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:25,665 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,665 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 2013-07-16 17:14:25,670 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 to hdfs://localhost:56710/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994859505 2013-07-16 17:14:25,670 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 after 4ms 2013-07-16 17:14:25,671 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,672 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,673 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:25,674 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,675 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994859505 2013-07-16 17:14:25,675 ERROR [IPC Server handler 2 on 49060] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.2 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:25,675 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 acquired by ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,678 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:51438 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 16 more 2013-07-16 17:14:25,682 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20457, fileLength: 20465, trailerPresent: true 2013-07-16 17:14:25,687 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:25,687 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:25,692 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:25,705 DEBUG [WriterThread-2] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000019.temp region=f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:25,717 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(594): Finishing writing output logs and closing down. 
2013-07-16 17:14:25,717 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:25,724 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:25,737 WARN [hbase-table-pool-62-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:25,737 WARN [hbase-table-pool-62-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) 
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:25,738 DEBUG [hbase-table-pool-62-thread-2] client.AsyncProcess(603): Attempt #6/35 failed for 3 operations on server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, resubmitting 3, tableName=test, location=region=test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc., hostname=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, seqNum=1, last exception was: java.net.ConnectException: Connection refused - sleeping 10019 ms. 2013-07-16 17:14:25,746 INFO [IPC Server handler 7 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_6467943994343019868_1035{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:25,747 INFO [IPC Server handler 9 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_6467943994343019868_1035{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:25,749 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000019.temp (wrote 21 edits in 29ms) 2013-07-16 17:14:25,752 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000019.temp to hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000039 2013-07-16 17:14:25,753 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(601): Processed 21 edits across 1 regions; log file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 is corrupted = false progress failed = false 2013-07-16 17:14:25,754 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,754 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,756 INFO 
[SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(462): successfully transitioned task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,756 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,756 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 done with task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 in 97ms 2013-07-16 17:14:25,761 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 acquired task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,763 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781, length=20601 2013-07-16 17:14:25,763 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:25,763 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 to hdfs://localhost:56710/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994856768 2013-07-16 17:14:25,764 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,765 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,765 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,765 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 2013-07-16 17:14:25,766 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 after 1ms 2013-07-16 17:14:25,768 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:25,768 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,769 ERROR [IPC Server handler 3 on 49060] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.2 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:25,771 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994856768 2013-07-16 17:14:25,772 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 acquired by ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,775 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:51438 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
16 more 2013-07-16 17:14:25,780 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20593, fileLength: 20601, trailerPresent: true 2013-07-16 17:14:25,785 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:25,786 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:25,787 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:25,801 DEBUG [WriterThread-1] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000002.temp region=f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:25,817 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(594): Finishing writing output logs and closing down. 2013-07-16 17:14:25,817 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:25,823 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:25,832 INFO [IPC Server handler 2 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-6132376319701602085_1037{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:25,833 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-6132376319701602085_1037{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:25,836 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000002.temp (wrote 17 edits in 16ms) 2013-07-16 17:14:25,839 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000002.temp to hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000018 2013-07-16 17:14:25,839 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(601): Processed 17 edits across 1 regions; log file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 is corrupted = false progress failed = false 2013-07-16 17:14:25,843 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,843 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,843 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(462): successfully transitioned task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,843 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 done with task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 in 82ms 2013-07-16 17:14:25,845 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,848 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 acquired task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,850 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855, length=20361 2013-07-16 17:14:25,850 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:25,852 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,854 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 2013-07-16 17:14:25,854 DEBUG 
[pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 to hdfs://localhost:56710/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994851781 2013-07-16 17:14:25,855 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,855 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,855 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 after 1ms 2013-07-16 17:14:25,857 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:25,861 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,862 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994851781 2013-07-16 17:14:25,862 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 acquired by ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,862 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20353, fileLength: 20361, trailerPresent: true 2013-07-16 17:14:25,870 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:25,870 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:25,872 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread 
Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:25,895 DEBUG [WriterThread-0] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000040.temp region=f3fce37071716f89a509124ef3fd1288 2013-07-16 17:14:25,896 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(594): Finishing writing output logs and closing down. 2013-07-16 17:14:25,896 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:25,897 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:25,907 INFO [IPC Server handler 9 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-5269337808487317269_1039{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:25,909 INFO [IPC Server handler 0 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-5269337808487317269_1039 size 20361 2013-07-16 17:14:25,910 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000040.temp (wrote 23 edits in 18ms) 2013-07-16 17:14:25,914 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000040.temp to hdfs://localhost:56710/user/ec2-user/hbase/test/f3fce37071716f89a509124ef3fd1288/recovered.edits/0000000000000000062 2013-07-16 17:14:25,914 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(601): Processed 23 edits across 1 regions; log file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 is corrupted = false progress failed = false 2013-07-16 17:14:25,916 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,917 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(462): successfully transitioned task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,917 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 done with task 
/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 in 69ms 2013-07-16 17:14:25,917 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,918 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,923 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 acquired task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:25,925 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658, length=0 2013-07-16 17:14:25,925 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:25,926 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 to hdfs://localhost:56710/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994857855 2013-07-16 17:14:25,927 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:25,927 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,928 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:25,930 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,931 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994857855 2013-07-16 17:14:25,932 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 acquired by ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:25,932 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(831): File hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 might be still open, length is 0 2013-07-16 17:14:25,932 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 2013-07-16 17:14:25,932 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/splitlog 2013-07-16 17:14:25,932 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] regionserver.SplitLogWorker(583): tasks arrived or departed 2013-07-16 17:14:25,933 INFO [IPC Server handler 2 on 56710] blockmanagement.BlockInfoUnderConstruction(248): BLOCK* blk_4297992342878601848_1031{blockUCState=UNDER_RECOVERY, primaryNodeIndex=0, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} recovery started, primary=127.0.0.1:47006 2013-07-16 17:14:25,933 WARN [IPC Server handler 2 on 56710] namenode.FSNamesystem(3135): DIR* NameSystem.internalReleaseLease: File /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 has not been closed. Lease recovery is in progress. 
RecoveryId = 1040 for block blk_4297992342878601848_1031{blockUCState=UNDER_RECOVERY, primaryNodeIndex=0, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} 2013-07-16 17:14:25,934 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(156): recoverLease=false, attempt=0 on file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 after 2ms 2013-07-16 17:14:26,075 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:26,082 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:26,085 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:26,433 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:26,436 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 2 unassigned = 1 2013-07-16 17:14:26,443 DEBUG [RpcServer.handler=1,port=50669] master.ServerManager(336): Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server 2013-07-16 17:14:26,449 FATAL [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1752): ABORTING region server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) org.apache.hadoop.hbase.exceptions.YouAreDeadException: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:232) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1001) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:839) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.YouAreDeadException): org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerReport(RegionServerStatusProtos.java:4095) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:999) ... 
17 more 2013-07-16 17:14:26,450 FATAL [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1760): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2013-07-16 17:14:26,451 ERROR [RpcServer.handler=0,port=50669] master.HMaster(1283): Region server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 reported a fatal error: ABORTING region server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) Cause: org.apache.hadoop.hbase.exceptions.YouAreDeadException: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:232) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1001) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:839) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.YouAreDeadException): org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerReport(RegionServerStatusProtos.java:4095) at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:999) ... 
17 more 2013-07-16 17:14:26,452 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1685): STOPPED: org.apache.hadoop.hbase.exceptions.YouAreDeadException: Server REPORT rejected; currently processing ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 as dead server at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:337) at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:252) at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:1264) at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:3800) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:26,454 INFO [RS:1;ip-10-197-55-49:39939] regionserver.SplitLogWorker(596): Sending interrupt to stop the worker thread 2013-07-16 17:14:26,455 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314] regionserver.SplitLogWorker(281): SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException 2013-07-16 17:14:26,455 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314] regionserver.SplitLogWorker(205): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 exiting 2013-07-16 17:14:26,455 INFO [Thread-358] regionserver.MemStoreFlusher$FlushHandler(267): Thread-358 exiting 2013-07-16 17:14:26,455 INFO [RS:1;ip-10-197-55-49:39939] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager abruptly. 2013-07-16 17:14:26,455 INFO [RS:1;ip-10-197-55-49:39939.compactionChecker] hbase.Chore(93): RS:1;ip-10-197-55-49:39939.compactionChecker exiting 2013-07-16 17:14:26,455 INFO [RS:1;ip-10-197-55-49:39939.logRoller] regionserver.LogRoller(119): LogRoller exiting. 2013-07-16 17:14:26,458 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(905): aborting server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:26,458 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:39939-0] handler.CloseRegionHandler(125): Processing close of test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:26,458 DEBUG [RS:1;ip-10-197-55-49:39939] catalog.CatalogTracker(208): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@1c344a47 2013-07-16 17:14:26,458 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:39939-0] regionserver.HRegion(965): Closing test,,1373994855276.f3fce37071716f89a509124ef3fd1288.: disabling compactions & flushes 2013-07-16 17:14:26,458 INFO [RS:1;ip-10-197-55-49:39939] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0016 2013-07-16 17:14:26,458 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:39939-0] regionserver.HRegion(987): Updates disabled for region test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:26,459 INFO [StoreCloserThread-test,,1373994855276.f3fce37071716f89a509124ef3fd1288.-1] regionserver.HStore(661): Closed f 2013-07-16 17:14:26,460 INFO [StoreCloserThread-test,,1373994855276.f3fce37071716f89a509124ef3fd1288.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:14:26,460 INFO [RS:1;ip-10-197-55-49:39939] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager abruptly. 
2013-07-16 17:14:26,461 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1076): Waiting on 1 regions to close 2013-07-16 17:14:26,461 DEBUG [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1080): {f3fce37071716f89a509124ef3fd1288=test,,1373994855276.f3fce37071716f89a509124ef3fd1288.} 2013-07-16 17:14:26,461 INFO [RS_CLOSE_REGION-ip-10-197-55-49:39939-0] regionserver.HRegion(1045): Closed test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:26,461 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:39939-0] handler.CloseRegionHandler(177): Closed region test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 2013-07-16 17:14:26,502 INFO [RS:1;ip-10-197-55-49:39939.periodicFlusher] hbase.Chore(93): RS:1;ip-10-197-55-49:39939.periodicFlusher exiting 2013-07-16 17:14:26,504 INFO [RS:1;ip-10-197-55-49:39939.leaseChecker] regionserver.Leases(124): RS:1;ip-10-197-55-49:39939.leaseChecker closing leases 2013-07-16 17:14:26,505 INFO [RS:1;ip-10-197-55-49:39939.leaseChecker] regionserver.Leases(131): RS:1;ip-10-197-55-49:39939.leaseChecker closed leases 2013-07-16 17:14:26,550 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49955-0x13fe879789b0005 Received ZooKeeper Event, type=None, state=Expired, path=null 2013-07-16 17:14:26,551 FATAL [RS:1;ip-10-197-55-49:49955-EventThread] regionserver.HRegionServer(1752): ABORTING region server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790: regionserver:49955-0x13fe879789b0005 regionserver:49955-0x13fe879789b0005 received expired from ZooKeeper, aborting org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:398) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:316) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:14:26,551 FATAL [RS:1;ip-10-197-55-49:49955-EventThread] regionserver.HRegionServer(1760): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2013-07-16 17:14:26,551 INFO [RS:1;ip-10-197-55-49:49955-EventThread] regionserver.HRegionServer(1685): STOPPED: regionserver:49955-0x13fe879789b0005 regionserver:49955-0x13fe879789b0005 received expired from ZooKeeper, aborting 2013-07-16 17:14:26,651 WARN [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:26,651 ERROR [RS:1;ip-10-197-55-49:49955] zookeeper.RecoverableZooKeeper(240): ZooKeeper delete failed after 1 retries 2013-07-16 17:14:26,651 WARN [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(958): Failed deleting my ephemeral node org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /1/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:153) at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1268) at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1257) at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1220) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:956) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:26,652 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(964): stopping server ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790; zookeeper connection closed. 2013-07-16 17:14:26,652 INFO [RS:1;ip-10-197-55-49:49955] regionserver.HRegionServer(967): RS:1;ip-10-197-55-49:49955 exiting 2013-07-16 17:14:26,652 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@79f5910e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@79f5910e 2013-07-16 17:14:26,661 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(935): stopping server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314; all regions closed. 2013-07-16 17:14:26,662 INFO [RS:1;ip-10-197-55-49:39939.logSyncer] wal.FSHLog$LogSyncer(966): RS:1;ip-10-197-55-49:39939.logSyncer exiting 2013-07-16 17:14:26,662 DEBUG [RS:1;ip-10-197-55-49:39939] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:26,666 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_4297992342878601848_1031{blockUCState=UNDER_RECOVERY, primaryNodeIndex=0, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:26,666 ERROR [IPC Server handler 6 on 56710] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.3 (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658: File does not exist. 
Holder DFSClient_hb_rs_ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314_-332928389_559 does not have any open files. 2013-07-16 17:14:26,666 INFO [IPC Server handler 4 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_4297992342878601848_1031{blockUCState=UNDER_RECOVERY, primaryNodeIndex=0, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:26,667 ERROR [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(1128): Close and delete failed org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658: File does not exist. Holder DFSClient_hb_rs_ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314_-332928389_559 does not have any open files. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2398) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2390) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2455) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2432) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:546) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:389) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40748) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:97) at org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:49) at org.apache.hadoop.hbase.regionserver.HRegionServer.closeWAL(HRegionServer.java:1128) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:941) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:26,768 INFO [RS:1;ip-10-197-55-49:39939] regionserver.Leases(124): RS:1;ip-10-197-55-49:39939 closing leases 2013-07-16 17:14:26,768 INFO [RS:1;ip-10-197-55-49:39939] regionserver.Leases(131): RS:1;ip-10-197-55-49:39939 closed leases 2013-07-16 17:14:26,768 INFO [RS:1;ip-10-197-55-49:39939] regionserver.CompactSplitThread(356): Waiting for Split Thread to finish... 2013-07-16 17:14:26,769 INFO [RS:1;ip-10-197-55-49:39939] regionserver.CompactSplitThread(356): Waiting for Merge Thread to finish... 2013-07-16 17:14:26,769 INFO [RS:1;ip-10-197-55-49:39939] regionserver.CompactSplitThread(356): Waiting for Large Compaction Thread to finish... 2013-07-16 17:14:26,769 INFO [RS:1;ip-10-197-55-49:39939] regionserver.CompactSplitThread(356): Waiting for Small Compaction Thread to finish... 2013-07-16 17:14:26,878 INFO [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl(152): Moving ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790's hlogs to my queue 2013-07-16 17:14:26,892 DEBUG [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl(379): Creating ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 with data 20950 2013-07-16 17:14:26,896 DEBUG [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl(379): Creating ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994861684 with data 20587 2013-07-16 17:14:26,903 DEBUG [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl(379): Creating ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 with data 0 2013-07-16 17:14:26,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 4281, total replicated edits: 1992 2013-07-16 17:14:26,923 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/lock 2013-07-16 17:14:26,929 DEBUG [ReplicationExecutor-0] replication.ReplicationQueueInfo(109): Found dead servers:[ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] 2013-07-16 17:14:26,932 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:14:26,932 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:26,935 INFO 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(250): Replicating 9bb659c2-f860-4340-b5f5-0571795e3364 -> 2a81acba-2c55-4568-ac13-a15ee9cb847a 2013-07-16 17:14:26,939 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20587, fileLength: 20595, trailerPresent: true 2013-07-16 17:14:26,958 ERROR [IPC Server handler 0 on 43175] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:26,960 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(497): NB dead servers : 1 2013-07-16 17:14:26,962 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:26,963 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:26,965 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(510): Log hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 still exists at hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:26,965 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 1 2013-07-16 17:14:27,066 ERROR [IPC Server handler 5 on 43175] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,070 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(497): NB dead servers : 1 2013-07-16 17:14:27,070 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,071 INFO 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,072 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(510): Log hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 still exists at hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,072 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 2 2013-07-16 17:14:27,083 DEBUG [RS:1;ip-10-197-55-49:39939-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:39939-0x13fe879789b0013 Received ZooKeeper Event, type=None, state=Expired, path=null 2013-07-16 17:14:27,083 FATAL [RS:1;ip-10-197-55-49:39939-EventThread] regionserver.HRegionServer(1752): ABORTING region server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314: regionserver:39939-0x13fe879789b0013 regionserver:39939-0x13fe879789b0013 received expired from ZooKeeper, aborting org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:398) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:316) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:14:27,084 FATAL [RS:1;ip-10-197-55-49:39939-EventThread] regionserver.HRegionServer(1760): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2013-07-16 17:14:27,084 INFO [RS:1;ip-10-197-55-49:39939-EventThread] regionserver.HRegionServer(1685): STOPPED: regionserver:39939-0x13fe879789b0013 regionserver:39939-0x13fe879789b0013 received expired from ZooKeeper, aborting 2013-07-16 17:14:27,087 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:27,091 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:27,093 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:27,183 WARN [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for 
/2/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:27,184 INFO [RS:1;ip-10-197-55-49:39939] util.RetryCounter(54): Sleeping 20ms before retry #1... 2013-07-16 17:14:27,204 WARN [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:27,204 ERROR [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(240): ZooKeeper getChildren failed after 1 retries 2013-07-16 17:14:27,204 INFO [RS:1;ip-10-197-55-49:39939] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0019 2013-07-16 17:14:27,210 WARN [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:27,210 INFO [RS:1;ip-10-197-55-49:39939] util.RetryCounter(54): Sleeping 20ms before retry #1... 2013-07-16 17:14:27,231 WARN [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:27,231 ERROR [RS:1;ip-10-197-55-49:39939] zookeeper.RecoverableZooKeeper(240): ZooKeeper delete failed after 1 retries 2013-07-16 17:14:27,231 WARN [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(958): Failed deleting my ephemeral node org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:153) at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1268) at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1257) at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1220) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:956) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:337) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1458) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41) 
at org.apache.hadoop.hbase.security.User.call(User.java:420) at org.apache.hadoop.hbase.security.User.access$300(User.java:51) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260) at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:27,231 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(964): stopping server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314; zookeeper connection closed. 2013-07-16 17:14:27,231 INFO [RS:1;ip-10-197-55-49:39939] regionserver.HRegionServer(967): RS:1;ip-10-197-55-49:39939 exiting 2013-07-16 17:14:27,232 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@420e54f3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@420e54f3 2013-07-16 17:14:27,273 ERROR [IPC Server handler 0 on 43175] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,275 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(497): NB dead servers : 1 2013-07-16 17:14:27,275 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,276 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,277 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(510): Log hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 still exists at hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,277 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 3 2013-07-16 17:14:27,434 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:27,436 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 2 unassigned = 1 2013-07-16 17:14:27,578 ERROR [IPC Server handler 2 on 43175] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does 
not exist: /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,580 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(497): NB dead servers : 1 2013-07-16 17:14:27,580 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,581 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(507): Possible location hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,582 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(510): Log hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 still exists at hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,582 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Unable to open a reader, sleeping 100 times 4 2013-07-16 17:14:27,765 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(156): recoverLease=true, attempt=1 on file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 after 4006ms 2013-07-16 17:14:27,767 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:27,768 ERROR [IPC Server handler 5 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:27,770 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
16 more 2013-07-16 17:14:27,773 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 14315, fileLength: 14323, trailerPresent: true 2013-07-16 17:14:27,778 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:27,778 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:27,779 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:27,789 DEBUG [WriterThread-1] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001054.temp region=55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:27,794 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(594): Finishing writing output logs and closing down. 2013-07-16 17:14:27,794 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:27,801 DEBUG [WriterThread-0] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/recovered.edits/0000000000000001176.temp region=ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:27,802 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:27,819 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-3985979304715098421_1107{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:27,820 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-3985979304715098421_1107{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:27,821 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6459399492157285553_1106{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:27,822 INFO [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/recovered.edits/0000000000000001176.temp (wrote 23 edits in 10ms) 2013-07-16 17:14:27,822 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6459399492157285553_1106{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:27,826 INFO [split-log-closeStream-1] 
wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001054.temp (wrote 123 edits in 11ms) 2013-07-16 17:14:27,826 INFO [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl(152): Moving ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314's hlogs to my queue 2013-07-16 17:14:27,829 DEBUG [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/recovered.edits/0000000000000001176.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/recovered.edits/0000000000000001199 2013-07-16 17:14:27,832 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001054.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001177 2013-07-16 17:14:27,833 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(601): Processed 146 edits across 2 regions; log file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 is corrupted = false progress failed = false 2013-07-16 17:14:27,834 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:27,835 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:27,836 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(462): successfully transitioned task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,836 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 done with task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 in 4085ms 2013-07-16 17:14:27,837 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 
2013-07-16 17:14:27,842 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(344): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 acquired task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,844 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(507): Splitting hlog: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136, length=20958 2013-07-16 17:14:27,844 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/replication/rs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314/lock 2013-07-16 17:14:27,844 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(508): DistributedLogReplay = false 2013-07-16 17:14:27,846 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,847 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(86): Recovering lease on dfs file hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,849 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] util.FSHDFSUtils(156): recoverLease=true, attempt=0 on file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 after 2ms 2013-07-16 17:14:27,851 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:27,852 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 to hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862604 2013-07-16 17:14:27,852 ERROR [IPC Server handler 6 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:27,855 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:27,856 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,856 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:929) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:837) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 16 more 2013-07-16 17:14:27,860 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:27,860 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862604 2013-07-16 17:14:27,860 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(801): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 acquired by ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,861 DEBUG [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20950, fileLength: 20958, trailerPresent: true 2013-07-16 17:14:27,866 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-0,5,main]: starting 2013-07-16 17:14:27,875 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-1,5,main]: starting 2013-07-16 17:14:27,880 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(1101): Writer thread Thread[WriterThread-2,5,main]: starting 2013-07-16 17:14:27,911 DEBUG [WriterThread-2] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer 
path=hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000841.temp region=2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:27,916 DEBUG [WriterThread-0] wal.HLogSplitter$LogRecoveredEditsOutputSink(1529): Creating writer path=hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000000942.temp region=55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:27,932 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(594): Finishing writing output logs and closing down. 2013-07-16 17:14:27,932 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1257): Waiting for split writer threads to finish 2013-07-16 17:14:27,934 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter$OutputSink(1275): Split writers finished 2013-07-16 17:14:27,959 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5607632630545867114_1111{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:27,961 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5607632630545867114_1111{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:27,964 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_897260825144917383_1110{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:27,964 INFO [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000841.temp (wrote 101 edits in 18ms) 2013-07-16 17:14:27,964 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_897260825144917383_1110{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:27,967 INFO [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1371): Closed path hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000000942.temp (wrote 112 edits in 35ms) 2013-07-16 17:14:27,969 DEBUG [split-log-closeStream-1] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000841.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000941 2013-07-16 17:14:27,971 DEBUG [split-log-closeStream-2] wal.HLogSplitter$LogRecoveredEditsOutputSink$2(1402): Rename 
hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000000942.temp to hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001053 2013-07-16 17:14:27,971 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] wal.HLogSplitter(601): Processed 213 edits across 2 regions; log file=hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 is corrupted = false progress failed = false 2013-07-16 17:14:27,973 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,973 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,974 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(462): successfully transitioned task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 to final state DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,974 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 done with task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 in 132ms 2013-07-16 17:14:27,975 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,982 DEBUG [pool-1-thread-1-EventThread] wal.HLogSplitter(691): Archived processed log hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 to hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 2013-07-16 17:14:27,983 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(649): Done splitting /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,985 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/splitlog 2013-07-16 17:14:27,985 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] regionserver.SplitLogWorker(583): tasks arrived or departed 2013-07-16 17:14:27,987 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,988 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20950, fileLength: 20958, trailerPresent: true 2013-07-16 17:14:27,989 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.SplitLogManager(364): finished splitting (more than or equal to) 103174 bytes in 6 log files in [hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790-splitting] in 4895ms 2013-07-16 17:14:27,989 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] handler.ServerShutdownHandler(206): Reassigning 13 region(s) that ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 was carrying (and 0 regions(s) that were opening on this server) 2013-07-16 17:14:27,990 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.AssignmentManager(1503): Assigning 13 region(s) to ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,990 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 64c33257daeacd0fe5bf6a175319eadb with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,990 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {64c33257daeacd0fe5bf6a175319eadb state=OPEN, ts=1373994854146, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {64c33257daeacd0fe5bf6a175319eadb state=OFFLINE, ts=1373994867990, server=null} 2013-07-16 17:14:27,990 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 64c33257daeacd0fe5bf6a175319eadb with OFFLINE state 2013-07-16 17:14:27,990 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region f4cfa4d251af617b31eb11c76cc68678 with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,991 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {f4cfa4d251af617b31eb11c76cc68678 state=OPEN, ts=1373994854521, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {f4cfa4d251af617b31eb11c76cc68678 state=OFFLINE, ts=1373994867991, server=null} 2013-07-16 17:14:27,991 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for f4cfa4d251af617b31eb11c76cc68678 with OFFLINE state 2013-07-16 17:14:27,991 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region d3ed59de1135ee985829ee3cbad0cee2 with current 
state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,991 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {d3ed59de1135ee985829ee3cbad0cee2 state=OPEN, ts=1373994854156, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {d3ed59de1135ee985829ee3cbad0cee2 state=OFFLINE, ts=1373994867991, server=null} 2013-07-16 17:14:27,991 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for d3ed59de1135ee985829ee3cbad0cee2 with OFFLINE state 2013-07-16 17:14:27,991 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /1/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C49955%252C1373994846790.1373994862136 2013-07-16 17:14:27,991 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 2fd443c241020be67cc0d08d473f5134 with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,992 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {2fd443c241020be67cc0d08d473f5134 state=OPEN, ts=1373994854527, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {2fd443c241020be67cc0d08d473f5134 state=OFFLINE, ts=1373994867992, server=null} 2013-07-16 17:14:27,992 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 2fd443c241020be67cc0d08d473f5134 with OFFLINE state 2013-07-16 17:14:27,992 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 55d7e62280245f719c8f2cc61c586c64 with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,992 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {55d7e62280245f719c8f2cc61c586c64 state=OPEN, ts=1373994854529, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {55d7e62280245f719c8f2cc61c586c64 state=OFFLINE, ts=1373994867992, server=null} 2013-07-16 17:14:27,992 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 55d7e62280245f719c8f2cc61c586c64 with OFFLINE state 2013-07-16 17:14:27,992 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region ba6e592748955d732d7843b9603163dc with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,992 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {ba6e592748955d732d7843b9603163dc state=OPEN, ts=1373994854517, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {ba6e592748955d732d7843b9603163dc state=OFFLINE, ts=1373994867992, server=null} 2013-07-16 17:14:27,992 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned 
node for ba6e592748955d732d7843b9603163dc with OFFLINE state 2013-07-16 17:14:27,993 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 23b3aa990a7ac4e12882f9d3eca30eea with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,993 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {23b3aa990a7ac4e12882f9d3eca30eea state=OPEN, ts=1373994854265, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {23b3aa990a7ac4e12882f9d3eca30eea state=OFFLINE, ts=1373994867993, server=null} 2013-07-16 17:14:27,993 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 23b3aa990a7ac4e12882f9d3eca30eea with OFFLINE state 2013-07-16 17:14:27,993 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 093d3ef494905701450f33a487333200 with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,993 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {093d3ef494905701450f33a487333200 state=OPEN, ts=1373994854314, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {093d3ef494905701450f33a487333200 state=OFFLINE, ts=1373994867993, server=null} 2013-07-16 17:14:27,994 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 093d3ef494905701450f33a487333200 with OFFLINE state 2013-07-16 17:14:27,994 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:27,994 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={64c33257daeacd0fe5bf6a175319eadb state=OFFLINE, ts=1373994867990, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,994 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 8316cb643e8db1f47659c2704a5d85bd with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,994 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {8316cb643e8db1f47659c2704a5d85bd state=OPEN, ts=1373994854323, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {8316cb643e8db1f47659c2704a5d85bd state=OFFLINE, ts=1373994867994, server=null} 2013-07-16 17:14:27,995 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 8316cb643e8db1f47659c2704a5d85bd with OFFLINE state 2013-07-16 17:14:27,995 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region c4611b71a935e3b170cd961ded7d0820 with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,995 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): 
Transitioned from {c4611b71a935e3b170cd961ded7d0820 state=OPEN, ts=1373994854367, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {c4611b71a935e3b170cd961ded7d0820 state=OFFLINE, ts=1373994867995, server=null} 2013-07-16 17:14:27,995 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for c4611b71a935e3b170cd961ded7d0820 with OFFLINE state 2013-07-16 17:14:27,996 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 4ac8676e6af9c1c25f2f2a90ed99d3ae with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,996 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPEN, ts=1373994854440, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OFFLINE, ts=1373994867996, server=null} 2013-07-16 17:14:27,996 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={f4cfa4d251af617b31eb11c76cc68678 state=OFFLINE, ts=1373994867991, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,996 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 4ac8676e6af9c1c25f2f2a90ed99d3ae with OFFLINE state 2013-07-16 17:14:27,997 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region baee7b76d51e7196ee3121edc50bda59 with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,997 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={d3ed59de1135ee985829ee3cbad0cee2 state=OFFLINE, ts=1373994867991, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,997 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {baee7b76d51e7196ee3121edc50bda59 state=OPEN, ts=1373994854463, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {baee7b76d51e7196ee3121edc50bda59 state=OFFLINE, ts=1373994867997, server=null} 2013-07-16 17:14:27,998 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for baee7b76d51e7196ee3121edc50bda59 with OFFLINE state 2013-07-16 17:14:27,998 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(327): Offline a region 6ca2c5a98917cab87c982b4bbb7e0115 with current state=OPEN, expected state=OFFLINE, assigned to server: ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790, expected null 2013-07-16 17:14:27,998 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={2fd443c241020be67cc0d08d473f5134 state=OFFLINE, ts=1373994867992, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:27,998 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {6ca2c5a98917cab87c982b4bbb7e0115 state=OPEN, ts=1373994854505, server=ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790} to {6ca2c5a98917cab87c982b4bbb7e0115 state=OFFLINE, 
ts=1373994867998, server=null} 2013-07-16 17:14:27,998 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(177): master:50904-0x13fe879789b0004 Async create of unassigned node for 6ca2c5a98917cab87c982b4bbb7e0115 with OFFLINE state 2013-07-16 17:14:27,999 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={55d7e62280245f719c8f2cc61c586c64 state=OFFLINE, ts=1373994867992, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,000 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={ba6e592748955d732d7843b9603163dc state=OFFLINE, ts=1373994867992, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,001 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={23b3aa990a7ac4e12882f9d3eca30eea state=OFFLINE, ts=1373994867993, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,002 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={093d3ef494905701450f33a487333200 state=OFFLINE, ts=1373994867993, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,003 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={64c33257daeacd0fe5bf6a175319eadb state=OFFLINE, ts=1373994867990, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,004 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:28,004 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 unassigned znodes=1 of total=13 2013-07-16 17:14:28,004 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={8316cb643e8db1f47659c2704a5d85bd state=OFFLINE, ts=1373994867994, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,005 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={c4611b71a935e3b170cd961ded7d0820 state=OFFLINE, ts=1373994867995, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,005 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={4ac8676e6af9c1c25f2f2a90ed99d3ae state=OFFLINE, ts=1373994867996, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,006 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={f4cfa4d251af617b31eb11c76cc68678 state=OFFLINE, ts=1373994867991, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,007 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={d3ed59de1135ee985829ee3cbad0cee2 state=OFFLINE, ts=1373994867991, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,007 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={baee7b76d51e7196ee3121edc50bda59 state=OFFLINE, ts=1373994867997, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,007 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={6ca2c5a98917cab87c982b4bbb7e0115 state=OFFLINE, ts=1373994867998, 
server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,008 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={2fd443c241020be67cc0d08d473f5134 state=OFFLINE, ts=1373994867992, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,009 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={55d7e62280245f719c8f2cc61c586c64 state=OFFLINE, ts=1373994867992, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,009 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={ba6e592748955d732d7843b9603163dc state=OFFLINE, ts=1373994867992, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,009 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={23b3aa990a7ac4e12882f9d3eca30eea state=OFFLINE, ts=1373994867993, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,009 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 unassigned znodes=7 of total=13 2013-07-16 17:14:28,010 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={093d3ef494905701450f33a487333200 state=OFFLINE, ts=1373994867993, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,011 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={8316cb643e8db1f47659c2704a5d85bd state=OFFLINE, ts=1373994867994, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,011 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={c4611b71a935e3b170cd961ded7d0820 state=OFFLINE, ts=1373994867995, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,011 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={4ac8676e6af9c1c25f2f2a90ed99d3ae state=OFFLINE, ts=1373994867996, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,012 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={baee7b76d51e7196ee3121edc50bda59 state=OFFLINE, ts=1373994867997, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,012 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={6ca2c5a98917cab87c982b4bbb7e0115 state=OFFLINE, ts=1373994867998, server=null}, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,015 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.AssignmentManager(1539): ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 unassigned znodes=13 of total=13 2013-07-16 17:14:28,015 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {64c33257daeacd0fe5bf6a175319eadb state=OFFLINE, ts=1373994867990, server=null} to {64c33257daeacd0fe5bf6a175319eadb state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,015 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from 
{f4cfa4d251af617b31eb11c76cc68678 state=OFFLINE, ts=1373994867991, server=null} to {f4cfa4d251af617b31eb11c76cc68678 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,015 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {d3ed59de1135ee985829ee3cbad0cee2 state=OFFLINE, ts=1373994867991, server=null} to {d3ed59de1135ee985829ee3cbad0cee2 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,015 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {2fd443c241020be67cc0d08d473f5134 state=OFFLINE, ts=1373994867992, server=null} to {2fd443c241020be67cc0d08d473f5134 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,015 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {55d7e62280245f719c8f2cc61c586c64 state=OFFLINE, ts=1373994867992, server=null} to {55d7e62280245f719c8f2cc61c586c64 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {ba6e592748955d732d7843b9603163dc state=OFFLINE, ts=1373994867992, server=null} to {ba6e592748955d732d7843b9603163dc state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {23b3aa990a7ac4e12882f9d3eca30eea state=OFFLINE, ts=1373994867993, server=null} to {23b3aa990a7ac4e12882f9d3eca30eea state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {093d3ef494905701450f33a487333200 state=OFFLINE, ts=1373994867994, server=null} to {093d3ef494905701450f33a487333200 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {8316cb643e8db1f47659c2704a5d85bd state=OFFLINE, ts=1373994867994, server=null} to {8316cb643e8db1f47659c2704a5d85bd state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {c4611b71a935e3b170cd961ded7d0820 state=OFFLINE, ts=1373994867995, server=null} to {c4611b71a935e3b170cd961ded7d0820 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OFFLINE, ts=1373994867996, server=null} to {4ac8676e6af9c1c25f2f2a90ed99d3ae state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {baee7b76d51e7196ee3121edc50bda59 state=OFFLINE, 
ts=1373994867998, server=null} to {baee7b76d51e7196ee3121edc50bda59 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,016 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.RegionStates(265): Transitioned from {6ca2c5a98917cab87c982b4bbb7e0115 state=OFFLINE, ts=1373994867998, server=null} to {6ca2c5a98917cab87c982b4bbb7e0115 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,019 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:28,027 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:28,027 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 64c33257daeacd0fe5bf6a175319eadb from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:28,029 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node f4cfa4d251af617b31eb11c76cc68678 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:28,031 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:28,033 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:28,033 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:28,036 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 64c33257daeacd0fe5bf6a175319eadb from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:28,036 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => 64c33257daeacd0fe5bf6a175319eadb, NAME => 'test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.', STARTKEY => '', ENDKEY => 'bbb'} 2013-07-16 17:14:28,036 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:28,037 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
2013-07-16 17:14:28,037 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=64c33257daeacd0fe5bf6a175319eadb, current state from region state map ={64c33257daeacd0fe5bf6a175319eadb state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,038 DEBUG [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=f4cfa4d251af617b31eb11c76cc68678, current state from region state map ={f4cfa4d251af617b31eb11c76cc68678 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,038 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node f4cfa4d251af617b31eb11c76cc68678 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:28,038 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => f4cfa4d251af617b31eb11c76cc68678, NAME => 'test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.', STARTKEY => 'ccc', ENDKEY => 'ddd'} 2013-07-16 17:14:28,039 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:28,039 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:28,042 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node d3ed59de1135ee985829ee3cbad0cee2 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:28,042 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:28,045 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
2013-07-16 17:14:28,046 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:28,046 INFO [StoreOpener-64c33257daeacd0fe5bf6a175319eadb-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:28,047 DEBUG [AM.ZK.Worker-pool-2-thread-2] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=d3ed59de1135ee985829ee3cbad0cee2, current state from region state map ={d3ed59de1135ee985829ee3cbad0cee2 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,047 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node d3ed59de1135ee985829ee3cbad0cee2 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:28,048 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => d3ed59de1135ee985829ee3cbad0cee2, NAME => 'test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.', STARTKEY => 'ggg', ENDKEY => 'hhh'} 2013-07-16 17:14:28,048 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:28,048 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:28,050 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:28,051 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:28,051 INFO [StoreOpener-64c33257daeacd0fe5bf6a175319eadb-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:28,052 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:28,053 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:28,055 INFO [StoreOpener-f4cfa4d251af617b31eb11c76cc68678-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:28,055 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:28,057 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 
2013-07-16 17:14:28,060 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:28,061 INFO [StoreOpener-d3ed59de1135ee985829ee3cbad0cee2-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:28,062 INFO [RpcServer.handler=0,port=49041] regionserver.HRegionServer(3455): Open test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:28,064 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000210 2013-07-16 17:14:28,064 DEBUG [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] master.AssignmentManager(1661): Bulk assigning done for ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:28,064 INFO [AM.ZK.Worker-pool-2-thread-2] master.RegionStates(265): Transitioned from {d3ed59de1135ee985829ee3cbad0cee2 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {d3ed59de1135ee985829ee3cbad0cee2 state=OPENING, ts=1373994868064, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,065 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(265): Transitioned from {64c33257daeacd0fe5bf6a175319eadb state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {64c33257daeacd0fe5bf6a175319eadb state=OPENING, ts=1373994868065, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,065 INFO [AM.ZK.Worker-pool-2-thread-1] master.RegionStates(265): Transitioned from {f4cfa4d251af617b31eb11c76cc68678 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {f4cfa4d251af617b31eb11c76cc68678 state=OPENING, ts=1373994868065, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:28,065 INFO [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0] handler.ServerShutdownHandler(301): Finished processing of shutdown of ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:28,068 INFO [StoreOpener-f4cfa4d251af617b31eb11c76cc68678-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:28,069 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20400, fileLength: 20408, trailerPresent: true 2013-07-16 17:14:28,085 INFO [StoreOpener-d3ed59de1135ee985829ee3cbad0cee2-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:28,092 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000630 2013-07-16 17:14:28,097 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] 
fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:28,099 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 15603, fileLength: 15611, trailerPresent: true 2013-07-16 17:14:28,105 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000422 2013-07-16 17:14:28,108 ERROR [IPC Server handler 8 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:28,125 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:28,127 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:28,129 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2948) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2886) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
23 more 2013-07-16 17:14:28,129 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:28,133 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(3085): Applied 627, skipped 0, firstSequenceidInLog=2, maxSequenceidInLog=210, path=hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000210 2013-07-16 17:14:28,133 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 18226, fileLength: 18234, trailerPresent: true 2013-07-16 17:14:28,138 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000236 2013-07-16 17:14:28,141 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 2546, fileLength: 2554, trailerPresent: true 2013-07-16 17:14:28,147 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:28,149 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3a44461b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:28,156 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:28,157 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3a44461b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:28,158 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3a44461b-0x13fe879789b001f connected 2013-07-16 17:14:28,159 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(3085): Applied 76, skipped 0, firstSequenceidInLog=211, maxSequenceidInLog=236, path=hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000236 2013-07-16 17:14:28,159 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1492): Started memstore flush for test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb., current region memstore size 115.3 K; wal is null, using passed sequenceid=236 2013-07-16 17:14:28,160 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(3085): Applied 476, skipped 0, firstSequenceidInLog=472, maxSequenceidInLog=630, path=hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000630 2013-07-16 17:14:28,163 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000707 2013-07-16 17:14:28,181 ERROR [IPC Server handler 9 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:28,182 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at 
java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2948) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2886) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
23 more 2013-07-16 17:14:28,198 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 7469, fileLength: 7477, trailerPresent: true 2013-07-16 17:14:28,200 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:28,217 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:28,217 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b001f 2013-07-16 17:14:28,228 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:28,228 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:28,231 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:28,233 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:28,235 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(3085): Applied 556, skipped 0, firstSequenceidInLog=237, maxSequenceidInLog=422, path=hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000422 2013-07-16 17:14:28,236 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(3085): Applied 227, skipped 0, firstSequenceidInLog=631, maxSequenceidInLog=707, path=hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000707 2013-07-16 17:14:28,237 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000471 2013-07-16 17:14:28,237 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1492): Started memstore flush for test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2., current region memstore size 115.3 K; wal is null, using passed sequenceid=707 2013-07-16 17:14:28,241 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 4846, fileLength: 4854, trailerPresent: true 2013-07-16 17:14:30,336 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:30,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:30,338 INFO [IPC Server handler 8 on 56710] blockmanagement.BlockInfoUnderConstruction(248): BLOCK* blk_4297992342878601848_1031{blockUCState=UNDER_RECOVERY, primaryNodeIndex=1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} recovery started, primary=127.0.0.1:51438 2013-07-16 17:14:30,339 WARN 
[IPC Server handler 8 on 56710] namenode.FSNamesystem(3135): DIR* NameSystem.internalReleaseLease: File /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 has not been closed. Lease recovery is in progress. RecoveryId = 1041 for block blk_4297992342878601848_1031{blockUCState=UNDER_RECOVERY, primaryNodeIndex=1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} 2013-07-16 17:14:30,341 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] util.FSHDFSUtils(156): recoverLease=false, attempt=1 on file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 after 4409ms 2013-07-16 17:14:30,342 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:30,346 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:30,353 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(3085): Applied 147, skipped 0, firstSequenceidInLog=423, maxSequenceidInLog=471, path=hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000471 2013-07-16 17:14:30,353 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1492): Started memstore flush for test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678., current region memstore size 115.3 K; wal is null, using passed sequenceid=471 2013-07-16 17:14:30,364 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:30,365 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:30,368 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:30,368 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:30,397 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6b4f7392 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:30,397 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6b4f7392 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:30,400 DEBUG [hbase-repl-pool-16-thread-1-EventThread] 
zookeeper.ZooKeeperWatcher(384): hconnection-0x6b4f7392-0x13fe879789b0020 connected 2013-07-16 17:14:30,563 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:30,563 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0020 2013-07-16 17:14:30,564 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_2434326968101444931_1115{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,565 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_2434326968101444931_1115{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,580 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:30,581 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-3329200572112482561_1116{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,581 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843)
    at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473)
    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365)
    at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591)
    at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406)
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102)
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43)
    at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259)
    at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
2013-07-16 17:14:30,580 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=236, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/.tmp/7178d7bf728c48c18b8b2e5d7c172949
2013-07-16 17:14:30,583 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-3329200572112482561_1116{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0
2013-07-16 17:14:30,583 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #3/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1
2013-07-16 17:14:30,585 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=707, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/.tmp/fc82b7cb0c75460a85a0bcbe46a5cd8a
2013-07-16 17:14:30,586 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: ConnectException: 639 times,
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158)
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146)
    at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106)
    at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689)
    at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697)
    at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161)
    at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756)
    at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
    at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856)
2013-07-16 17:14:30,590 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: ConnectException: 639 times,
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158)
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146)
    at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106)
    at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689)
    at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697)
    at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161)
    at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756)
    at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
    at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856)
    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387)
    at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591)
    at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648)
    at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177)
    at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376)
2013-07-16 17:14:30,594 INFO [IPC Server handler 6 on 43175]
blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6189489605848558305_1117{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,595 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6189489605848558305_1117{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,599 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=471, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/.tmp/1789151c3c264af086c9cd11aae4f086 2013-07-16 17:14:30,602 ERROR [IPC Server handler 1 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:30,604 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-1] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2915) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 31 more 2013-07-16 17:14:30,628 ERROR [IPC Server handler 2 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:30,631 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x49a3a85d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:30,649 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x49a3a85d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:30,649 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. 
org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at 
org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2915) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 31 more 2013-07-16 17:14:30,650 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x49a3a85d-0x13fe879789b0021 connected 2013-07-16 17:14:30,662 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/.tmp/1789151c3c264af086c9cd11aae4f086 as hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/f/1789151c3c264af086c9cd11aae4f086 2013-07-16 17:14:30,663 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/.tmp/fc82b7cb0c75460a85a0bcbe46a5cd8a as hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/f/fc82b7cb0c75460a85a0bcbe46a5cd8a 2013-07-16 17:14:30,666 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/.tmp/7178d7bf728c48c18b8b2e5d7c172949 as hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/f/7178d7bf728c48c18b8b2e5d7c172949 2013-07-16 17:14:30,684 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/f/1789151c3c264af086c9cd11aae4f086, entries=703, sequenceid=471, filesize=21.2 K 2013-07-16 17:14:30,685 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 
in 332ms, sequenceid=471, compaction requested=false; wal=null 2013-07-16 17:14:30,688 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/f/fc82b7cb0c75460a85a0bcbe46a5cd8a, entries=703, sequenceid=707, filesize=21.2 K 2013-07-16 17:14:30,688 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. in 2451ms, sequenceid=707, compaction requested=false; wal=null 2013-07-16 17:14:30,704 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:30,704 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-3061601612102510821_1091 127.0.0.1:39475 127.0.0.1:39876 2013-07-16 17:14:30,704 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0021 2013-07-16 17:14:30,705 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_293337223029028366_1096 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:14:30,705 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000630 2013-07-16 17:14:30,705 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000422 2013-07-16 17:14:30,707 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-7590496273978834333_1102 127.0.0.1:39475 127.0.0.1:39876 2013-07-16 17:14:30,708 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/d3ed59de1135ee985829ee3cbad0cee2/recovered.edits/0000000000000000707 2013-07-16 17:14:30,711 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-8337894913240143259_1092 127.0.0.1:39475 127.0.0.1:39876 2013-07-16 17:14:30,715 ERROR [IPC Server handler 3 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:30,716 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-1] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2915) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 32 more 2013-07-16 17:14:30,717 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/f4cfa4d251af617b31eb11c76cc68678/recovered.edits/0000000000000000471 2013-07-16 17:14:30,720 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined d3ed59de1135ee985829ee3cbad0cee2; next sequenceid=708 2013-07-16 17:14:30,720 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:30,721 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/f/7178d7bf728c48c18b8b2e5d7c172949, entries=703, sequenceid=236, filesize=21.2 K 2013-07-16 17:14:30,721 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
in 2562ms, sequenceid=236, compaction requested=false; wal=null 2013-07-16 17:14:30,723 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined f4cfa4d251af617b31eb11c76cc68678; next sequenceid=472 2013-07-16 17:14:30,723 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:30,723 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-5219464004133508689_1098 127.0.0.1:39475 127.0.0.1:39876 2013-07-16 17:14:30,728 INFO [PostOpenDeployTasks:d3ed59de1135ee985829ee3cbad0cee2] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:30,733 INFO [PostOpenDeployTasks:f4cfa4d251af617b31eb11c76cc68678] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:30,741 INFO [PostOpenDeployTasks:d3ed59de1135ee985829ee3cbad0cee2] catalog.MetaEditor(432): Updated row test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,741 INFO [PostOpenDeployTasks:d3ed59de1135ee985829ee3cbad0cee2] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:14:30,741 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node d3ed59de1135ee985829ee3cbad0cee2 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:30,745 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:30,746 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node d3ed59de1135ee985829ee3cbad0cee2 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:30,746 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => d3ed59de1135ee985829ee3cbad0cee2, NAME => 'test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.', STARTKEY => 'ggg', ENDKEY => 'hhh'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,746 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,746 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 2fd443c241020be67cc0d08d473f5134 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:30,752 INFO [PostOpenDeployTasks:f4cfa4d251af617b31eb11c76cc68678] catalog.MetaEditor(432): Updated row test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 
with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,752 INFO [PostOpenDeployTasks:f4cfa4d251af617b31eb11c76cc68678] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 2013-07-16 17:14:30,753 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node f4cfa4d251af617b31eb11c76cc68678 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:30,756 DEBUG [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=d3ed59de1135ee985829ee3cbad0cee2, current state from region state map ={d3ed59de1135ee985829ee3cbad0cee2 state=OPENING, ts=1373994868064, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,756 INFO [AM.ZK.Worker-pool-2-thread-17] master.RegionStates(265): Transitioned from {d3ed59de1135ee985829ee3cbad0cee2 state=OPENING, ts=1373994868064, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {d3ed59de1135ee985829ee3cbad0cee2 state=OPEN, ts=1373994870756, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,757 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for d3ed59de1135ee985829ee3cbad0cee2 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:30,757 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for d3ed59de1135ee985829ee3cbad0cee2 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:30,759 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:30,760 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node f4cfa4d251af617b31eb11c76cc68678 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:30,760 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => f4cfa4d251af617b31eb11c76cc68678, NAME => 'test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.', STARTKEY => 'ccc', ENDKEY => 'ddd'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,760 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,760 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 55d7e62280245f719c8f2cc61c586c64 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:30,760 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:30,761 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000210 2013-07-16 17:14:30,762 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 2fd443c241020be67cc0d08d473f5134 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:30,762 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => 2fd443c241020be67cc0d08d473f5134, NAME => 'test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.', STARTKEY => 'hhh', ENDKEY => 'iii'} 2013-07-16 17:14:30,762 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=f4cfa4d251af617b31eb11c76cc68678, current state from region state map ={f4cfa4d251af617b31eb11c76cc68678 state=OPENING, ts=1373994868065, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,762 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(265): Transitioned from {f4cfa4d251af617b31eb11c76cc68678 state=OPENING, ts=1373994868065, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {f4cfa4d251af617b31eb11c76cc68678 state=OPEN, ts=1373994870762, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,762 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:30,763 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 
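A note on the recurring "Can't continue with getBlockLocalPathInfo() authorization" warnings above: they come from HDFS's older short-circuit local read path, where the client asks the local DataNode for the block's on-disk path and reads the file directly. The DataNode only answers getBlockLocalPathInfo() for users listed in dfs.block.local-path-access.user, so the test user ec2-user.hfs.0 is rejected, the DataNode is added to deadNodes, and the read falls back to an ordinary socket read; the regions are still onlined afterwards, so this looks like fallback noise rather than the cause of the failure. A minimal sketch of the two settings involved, assuming the legacy getBlockLocalPathInfo()-based mechanism of this Hadoop line; the values here are illustrative and not read from the test's actual configuration:

    import org.apache.hadoop.conf.Configuration;

    public class ShortCircuitReadSketch {
        public static Configuration sketch() {
            Configuration conf = new Configuration();
            // Client side: request short-circuit (local) block reads.
            conf.setBoolean("dfs.client.read.shortcircuit", true);
            // DataNode side: users allowed to call getBlockLocalPathInfo().
            // If the reading user is missing from this list, the DataNode throws
            // the AccessControlException seen in the log and the client falls
            // back to a normal remote read.
            conf.set("dfs.block.local-path-access.user", "ec2-user.hfs.0");
            return conf;
        }
    }

Either adding the user to dfs.block.local-path-access.user or leaving dfs.client.read.shortcircuit off should silence these warnings; neither should change the replication behaviour under test.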
2013-07-16 17:14:30,763 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for f4cfa4d251af617b31eb11c76cc68678 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:30,763 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for f4cfa4d251af617b31eb11c76cc68678 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:30,763 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_7656196182600639896_1095 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:14:30,764 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/64c33257daeacd0fe5bf6a175319eadb/recovered.edits/0000000000000000236 2013-07-16 17:14:30,764 DEBUG [AM.ZK.Worker-pool-2-thread-11] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=2fd443c241020be67cc0d08d473f5134, current state from region state map ={2fd443c241020be67cc0d08d473f5134 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,764 INFO [AM.ZK.Worker-pool-2-thread-11] master.RegionStates(265): Transitioned from {2fd443c241020be67cc0d08d473f5134 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {2fd443c241020be67cc0d08d473f5134 state=OPENING, ts=1373994870764, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,766 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/d3ed59de1135ee985829ee3cbad0cee2 2013-07-16 17:14:30,766 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(629): Onlined 64c33257daeacd0fe5bf6a175319eadb; next sequenceid=237 2013-07-16 17:14:30,766 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:30,766 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:30,766 DEBUG [AM.ZK.Worker-pool-2-thread-15] master.AssignmentManager$4(1218): The znode of test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. has been deleted, region state: {d3ed59de1135ee985829ee3cbad0cee2 state=OPEN, ts=1373994870756, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,767 INFO [AM.ZK.Worker-pool-2-thread-15] master.RegionStates(301): Onlined d3ed59de1135ee985829ee3cbad0cee2 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,767 INFO [AM.ZK.Worker-pool-2-thread-15] master.AssignmentManager$4(1223): The master has opened test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,767 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region d3ed59de1135ee985829ee3cbad0cee2 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:30,771 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:30,771 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 55d7e62280245f719c8f2cc61c586c64 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:30,772 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => 55d7e62280245f719c8f2cc61c586c64, NAME => 'test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.', STARTKEY => 'iii', ENDKEY => 'jjj'} 2013-07-16 17:14:30,772 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/f4cfa4d251af617b31eb11c76cc68678 2013-07-16 17:14:30,772 DEBUG [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager$4(1218): The znode of test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. has been deleted, region state: {f4cfa4d251af617b31eb11c76cc68678 state=OPEN, ts=1373994870762, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,772 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:30,772 INFO [AM.ZK.Worker-pool-2-thread-12] master.RegionStates(301): Onlined f4cfa4d251af617b31eb11c76cc68678 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,772 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:30,772 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:30,772 INFO [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager$4(1223): The master has opened test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,773 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region f4cfa4d251af617b31eb11c76cc68678 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:30,774 DEBUG [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=55d7e62280245f719c8f2cc61c586c64, current state from region state map ={55d7e62280245f719c8f2cc61c586c64 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,774 INFO [AM.ZK.Worker-pool-2-thread-5] master.RegionStates(265): Transitioned from {55d7e62280245f719c8f2cc61c586c64 state=PENDING_OPEN, ts=1373994868015, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {55d7e62280245f719c8f2cc61c586c64 state=OPENING, ts=1373994870774, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,775 INFO [StoreOpener-2fd443c241020be67cc0d08d473f5134-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:30,780 INFO [PostOpenDeployTasks:64c33257daeacd0fe5bf6a175319eadb] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:14:30,782 INFO [StoreOpener-2fd443c241020be67cc0d08d473f5134-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:30,792 INFO [StoreOpener-55d7e62280245f719c8f2cc61c586c64-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:30,795 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000840 2013-07-16 17:14:30,797 INFO [PostOpenDeployTasks:64c33257daeacd0fe5bf6a175319eadb] catalog.MetaEditor(432): Updated row test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,797 INFO [PostOpenDeployTasks:64c33257daeacd0fe5bf6a175319eadb] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
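The CompactionConfiguration lines just above record the effective compaction tuning each store prints as it opens: file-size bounds [134217728, 9223372036854775807), file-count bounds [3, 10), selection ratio 1.2 (5.0 off-peak), a throttle point of 2684354560 bytes, and a major-compaction period of 604800000 ms (seven days) with 0.5 jitter. As a rough decoder, the sketch below maps those numbers onto the hbase-site.xml keys they most plausibly correspond to; the key names are assumptions based on the defaults of this HBase line, not values read out of the test's configuration, and the throttle point (derived from flush size and max files) is left out:

    import org.apache.hadoop.conf.Configuration;

    public class CompactionConfigSketch {
        public static Configuration sketch() {
            Configuration conf = new Configuration();
            // "files [3, 10)": minimum and maximum store files per compaction.
            conf.setInt("hbase.hstore.compaction.min", 3);
            conf.setInt("hbase.hstore.compaction.max", 10);
            // "ratio 1.200000; off-peak ratio 5.000000": file-selection ratios.
            conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
            conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
            // "size [134217728, 9223372036854775807)": per-file size bounds; the
            // lower bound tracks the 128 MB memstore flush size and the upper
            // bound is effectively unlimited (Long.MAX_VALUE).
            conf.setLong("hbase.hstore.compaction.min.size", 134217728L);
            conf.setLong("hbase.hstore.compaction.max.size", Long.MAX_VALUE);
            // "major period 604800000, major jitter 0.500000": a major compaction
            // roughly every 7 days, randomised by up to +/-50%.
            conf.setLong("hbase.hregion.majorcompaction", 604800000L);
            conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
            return conf;
        }
    }

These look like the stock defaults of that era, so the mini-cluster appears to be running with untuned compaction settings here.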
2013-07-16 17:14:30,797 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 64c33257daeacd0fe5bf6a175319eadb from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:30,800 INFO [StoreOpener-55d7e62280245f719c8f2cc61c586c64-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:30,801 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 13130, fileLength: 13138, trailerPresent: true 2013-07-16 17:14:30,801 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:30,801 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 64c33257daeacd0fe5bf6a175319eadb from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:30,801 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 64c33257daeacd0fe5bf6a175319eadb, NAME => 'test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.', STARTKEY => '', ENDKEY => 'bbb'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,801 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,801 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node ba6e592748955d732d7843b9603163dc from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:30,806 DEBUG [AM.ZK.Worker-pool-2-thread-8] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=64c33257daeacd0fe5bf6a175319eadb, current state from region state map ={64c33257daeacd0fe5bf6a175319eadb state=OPENING, ts=1373994868065, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,808 INFO [AM.ZK.Worker-pool-2-thread-8] master.RegionStates(265): Transitioned from {64c33257daeacd0fe5bf6a175319eadb state=OPENING, ts=1373994868065, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {64c33257daeacd0fe5bf6a175319eadb state=OPEN, ts=1373994870808, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,808 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for 64c33257daeacd0fe5bf6a175319eadb from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:30,809 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 64c33257daeacd0fe5bf6a175319eadb that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:30,816 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully 
transitioned node ba6e592748955d732d7843b9603163dc from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:30,816 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => ba6e592748955d732d7843b9603163dc, NAME => 'test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.', STARTKEY => 'jjj', ENDKEY => 'kkk'} 2013-07-16 17:14:30,816 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:30,817 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:30,818 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:30,820 DEBUG [AM.ZK.Worker-pool-2-thread-6] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=ba6e592748955d732d7843b9603163dc, current state from region state map ={ba6e592748955d732d7843b9603163dc state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,820 INFO [AM.ZK.Worker-pool-2-thread-6] master.RegionStates(265): Transitioned from {ba6e592748955d732d7843b9603163dc state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {ba6e592748955d732d7843b9603163dc state=OPENING, ts=1373994870820, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,820 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/64c33257daeacd0fe5bf6a175319eadb 2013-07-16 17:14:30,820 DEBUG [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager$4(1218): The znode of test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. has been deleted, region state: {64c33257daeacd0fe5bf6a175319eadb state=OPEN, ts=1373994870808, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:30,821 INFO [AM.ZK.Worker-pool-2-thread-3] master.RegionStates(301): Onlined 64c33257daeacd0fe5bf6a175319eadb on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,821 INFO [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager$4(1223): The master has opened test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:30,821 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 64c33257daeacd0fe5bf6a175319eadb in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:30,821 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:30,821 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001053 2013-07-16 17:14:30,824 ERROR [IPC Server handler 4 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:30,827 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1bf12809 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:30,827 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1bf12809 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:30,828 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.read(DataInputStream.java:83) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:122) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2948) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2886) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
23 more 2013-07-16 17:14:30,828 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1bf12809-0x13fe879789b0022 connected 2013-07-16 17:14:30,830 INFO [StoreOpener-ba6e592748955d732d7843b9603163dc-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:30,832 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 11020, fileLength: 11028, trailerPresent: true 2013-07-16 17:14:30,834 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:30,836 INFO [StoreOpener-ba6e592748955d732d7843b9603163dc-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:30,844 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(3085): Applied 400, skipped 0, firstSequenceidInLog=706, maxSequenceidInLog=840, path=hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000840 2013-07-16 17:14:30,847 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000941 2013-07-16 17:14:30,852 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 9942, fileLength: 9950, trailerPresent: true 2013-07-16 17:14:30,856 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:30,869 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/recovered.edits/0000000000000001199 2013-07-16 17:14:30,873 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:30,874 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0022 2013-07-16 17:14:30,877 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 2275, fileLength: 2283, trailerPresent: true 2013-07-16 17:14:30,883 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(3085): Applied 336, skipped 0, firstSequenceidInLog=942, maxSequenceidInLog=1053, path=hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001053 2013-07-16 17:14:30,886 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2941): Replaying edits from hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001177 2013-07-16 17:14:30,887 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): 
regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:30,889 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:30,891 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(3085): Applied 303, skipped 0, firstSequenceidInLog=841, maxSequenceidInLog=941, path=hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000941 2013-07-16 17:14:30,892 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(3085): Applied 68, skipped 0, firstSequenceidInLog=1176, maxSequenceidInLog=1199, path=hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/recovered.edits/0000000000000001199 2013-07-16 17:14:30,892 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1492): Started memstore flush for test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134., current region memstore size 115.3 K; wal is null, using passed sequenceid=941 2013-07-16 17:14:30,892 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1492): Started memstore flush for test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc., current region memstore size 11.2 K; wal is null, using passed sequenceid=1199 2013-07-16 17:14:30,898 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 12052, fileLength: 12060, trailerPresent: true 2013-07-16 17:14:30,908 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:30,923 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:30,925 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(3085): Applied 367, skipped 0, firstSequenceidInLog=1054, maxSequenceidInLog=1177, path=hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001177 2013-07-16 17:14:30,925 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:30,926 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1492): Started memstore flush for test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64., current region memstore size 115.3 K; wal is null, using passed sequenceid=1177 2013-07-16 17:14:30,930 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:30,985 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-7110683477746666650_1119{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,986 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-7110683477746666650_1119{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,992 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=1199, memsize=11.2 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/.tmp/5e16399f6f0643d8a919de3800c319db 2013-07-16 17:14:30,996 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3280970750906648128_1122{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:30,998 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3280970750906648128_1122{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:31,000 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=1177, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/.tmp/6ba1c121110f4d59bec6c1d634cddc70 2013-07-16 17:14:31,001 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6066437623958100217_1123{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:31,002 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/.tmp/5e16399f6f0643d8a919de3800c319db as hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/f/5e16399f6f0643d8a919de3800c319db 2013-07-16 17:14:31,004 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6066437623958100217_1123 size 21685 2013-07-16 17:14:31,004 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=941, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/.tmp/7f09627347874b078caedcdebfc6386f 2013-07-16 17:14:31,011 ERROR [IPC Server handler 5 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:31,013 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2915) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 31 more 2013-07-16 17:14:31,015 ERROR [IPC Server handler 6 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:31,039 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2915) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 31 more 2013-07-16 17:14:31,039 ERROR [IPC Server handler 7 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:31,044 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-1] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2915) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 32 more 2013-07-16 17:14:31,047 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/f/5e16399f6f0643d8a919de3800c319db, entries=68, sequenceid=1199, filesize=2.9 K 2013-07-16 17:14:31,047 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1636): Finished memstore flush of ~11.2 K/11424, currentsize=0/0 for region test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 
in 155ms, sequenceid=1199, compaction requested=false; wal=null 2013-07-16 17:14:31,051 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-3985979304715098421_1107 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:14:31,051 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/.tmp/6ba1c121110f4d59bec6c1d634cddc70 as hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/f/6ba1c121110f4d59bec6c1d634cddc70 2013-07-16 17:14:31,052 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/.tmp/7f09627347874b078caedcdebfc6386f as hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/f/7f09627347874b078caedcdebfc6386f 2013-07-16 17:14:31,052 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/recovered.edits/0000000000000001199 2013-07-16 17:14:31,057 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(629): Onlined ba6e592748955d732d7843b9603163dc; next sequenceid=1200 2013-07-16 17:14:31,057 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:31,060 ERROR [IPC Server handler 8 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:31,071 WARN [RS_OPEN_REGION-ip-10-197-55-49:49041-2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2915) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:700) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:609) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:580) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4224) at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4195) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4148) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4099) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:459) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:137) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 32 more 2013-07-16 17:14:31,071 INFO [PostOpenDeployTasks:ba6e592748955d732d7843b9603163dc] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:31,074 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/f/6ba1c121110f4d59bec6c1d634cddc70, entries=703, sequenceid=1177, filesize=21.2 K 2013-07-16 17:14:31,075 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
in 149ms, sequenceid=1177, compaction requested=false; wal=null 2013-07-16 17:14:31,078 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_897260825144917383_1110 127.0.0.1:39475 127.0.0.1:39876 2013-07-16 17:14:31,081 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/f/7f09627347874b078caedcdebfc6386f, entries=703, sequenceid=941, filesize=21.2 K 2013-07-16 17:14:31,082 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. in 190ms, sequenceid=941, compaction requested=false; wal=null 2013-07-16 17:14:31,085 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001053 2013-07-16 17:14:31,087 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_3049710619349538948_1101 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:14:31,087 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000840 2013-07-16 17:14:31,088 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-6459399492157285553_1106 127.0.0.1:39475 127.0.0.1:39876 2013-07-16 17:14:31,088 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/55d7e62280245f719c8f2cc61c586c64/recovered.edits/0000000000000001177 2013-07-16 17:14:31,089 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-5607632630545867114_1111 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:14:31,089 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(2922): Deleted recovered.edits file=hdfs://localhost:43175/user/ec2-user/hbase/test/2fd443c241020be67cc0d08d473f5134/recovered.edits/0000000000000000941 2013-07-16 17:14:31,091 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5bbe4713 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:31,101 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined 55d7e62280245f719c8f2cc61c586c64; next sequenceid=1178 2013-07-16 17:14:31,101 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:31,101 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5bbe4713 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:31,103 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5bbe4713-0x13fe879789b0023 connected 2013-07-16 17:14:31,104 INFO [PostOpenDeployTasks:55d7e62280245f719c8f2cc61c586c64] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 
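The AccessControlException above ("The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo") comes from the legacy HDFS short-circuit read path, visible as BlockReaderLocal in the "Caused by" frames: the client asks the DataNode for a local block path, the DataNode checks its whitelist, the call is denied, and the read appears to fall back to a normal socket read (the flush above still completes). Below is a minimal sketch of the configuration involved, assuming that legacy code path; the property names are standard Hadoop keys, and the user value is simply copied from the log, not something this build is known to set.

    import org.apache.hadoop.conf.Configuration;

    // Sketch only: how the legacy short-circuit read whitelist is typically wired up.
    public class ShortCircuitReadSketch {
      public static Configuration apply(Configuration conf) {
        // Client side: let DFSClient attempt local (short-circuit) block reads.
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        // DataNode side: users allowed to call getBlockLocalPathInfo(); if the calling
        // user is missing here, the DataNode raises the AccessControlException seen above.
        conf.set("dfs.block.local-path-access.user", "ec2-user.hfs.0");
        return conf;
      }
    }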
2013-07-16 17:14:31,104 INFO [PostOpenDeployTasks:ba6e592748955d732d7843b9603163dc] catalog.MetaEditor(432): Updated row test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,112 INFO [PostOpenDeployTasks:ba6e592748955d732d7843b9603163dc] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:14:31,114 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node ba6e592748955d732d7843b9603163dc from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,119 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:31,120 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node ba6e592748955d732d7843b9603163dc from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,120 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => ba6e592748955d732d7843b9603163dc, NAME => 'test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.', STARTKEY => 'jjj', ENDKEY => 'kkk'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,120 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,120 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 23b3aa990a7ac4e12882f9d3eca30eea from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,121 DEBUG [AM.ZK.Worker-pool-2-thread-4] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=ba6e592748955d732d7843b9603163dc, current state from region state map ={ba6e592748955d732d7843b9603163dc state=OPENING, ts=1373994870820, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,122 INFO [AM.ZK.Worker-pool-2-thread-4] master.RegionStates(265): Transitioned from {ba6e592748955d732d7843b9603163dc state=OPENING, ts=1373994870820, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {ba6e592748955d732d7843b9603163dc state=OPEN, ts=1373994871122, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,122 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(145): Handling OPENED event for ba6e592748955d732d7843b9603163dc from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,122 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for ba6e592748955d732d7843b9603163dc that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,126 INFO [PostOpenDeployTasks:55d7e62280245f719c8f2cc61c586c64] catalog.MetaEditor(432): Updated row 
test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,126 INFO [PostOpenDeployTasks:55d7e62280245f719c8f2cc61c586c64] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:14:31,126 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 55d7e62280245f719c8f2cc61c586c64 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,133 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 23b3aa990a7ac4e12882f9d3eca30eea from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,133 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => 23b3aa990a7ac4e12882f9d3eca30eea, NAME => 'test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.', STARTKEY => 'lll', ENDKEY => 'mmm'} 2013-07-16 17:14:31,134 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:31,134 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:31,135 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined 2fd443c241020be67cc0d08d473f5134; next sequenceid=942 2013-07-16 17:14:31,136 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:31,141 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 55d7e62280245f719c8f2cc61c586c64 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,141 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 55d7e62280245f719c8f2cc61c586c64, NAME => 'test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.', STARTKEY => 'iii', ENDKEY => 'jjj'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,141 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,141 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 093d3ef494905701450f33a487333200 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,144 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:31,144 DEBUG [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager$4(1218): The znode of test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 
has been deleted, region state: {ba6e592748955d732d7843b9603163dc state=OPEN, ts=1373994871122, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,144 INFO [AM.ZK.Worker-pool-2-thread-13] master.RegionStates(301): Onlined ba6e592748955d732d7843b9603163dc on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,144 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,144 INFO [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager$4(1223): The master has opened test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,145 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region ba6e592748955d732d7843b9603163dc in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,145 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:31,146 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:31,147 DEBUG [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=23b3aa990a7ac4e12882f9d3eca30eea, current state from region state map ={23b3aa990a7ac4e12882f9d3eca30eea state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,148 INFO [AM.ZK.Worker-pool-2-thread-20] master.RegionStates(265): Transitioned from {23b3aa990a7ac4e12882f9d3eca30eea state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {23b3aa990a7ac4e12882f9d3eca30eea state=OPENING, ts=1373994871148, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,149 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:31,150 DEBUG [AM.ZK.Worker-pool-2-thread-16] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=55d7e62280245f719c8f2cc61c586c64, current state from region state map ={55d7e62280245f719c8f2cc61c586c64 state=OPENING, ts=1373994870774, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,150 INFO [AM.ZK.Worker-pool-2-thread-16] master.RegionStates(265): Transitioned from {55d7e62280245f719c8f2cc61c586c64 state=OPENING, ts=1373994870774, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {55d7e62280245f719c8f2cc61c586c64 state=OPEN, ts=1373994871150, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,150 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper 
sessionid=0x13fe879789b0023 2013-07-16 17:14:31,150 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for 55d7e62280245f719c8f2cc61c586c64 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,150 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 55d7e62280245f719c8f2cc61c586c64 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,162 INFO [PostOpenDeployTasks:2fd443c241020be67cc0d08d473f5134] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:31,170 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/093d3ef494905701450f33a487333200 2013-07-16 17:14:31,170 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 093d3ef494905701450f33a487333200 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,171 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => 093d3ef494905701450f33a487333200, NAME => 'test,nnn,1373994853026.093d3ef494905701450f33a487333200.', STARTKEY => 'nnn', ENDKEY => 'ooo'} 2013-07-16 17:14:31,171 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/55d7e62280245f719c8f2cc61c586c64 2013-07-16 17:14:31,171 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager$4(1218): The znode of test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. has been deleted, region state: {55d7e62280245f719c8f2cc61c586c64 state=OPEN, ts=1373994871150, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,171 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,171 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(301): Onlined 55d7e62280245f719c8f2cc61c586c64 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,172 INFO [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager$4(1223): The master has opened test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,171 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 093d3ef494905701450f33a487333200 2013-07-16 17:14:31,172 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
2013-07-16 17:14:31,173 DEBUG [AM.ZK.Worker-pool-2-thread-2] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=093d3ef494905701450f33a487333200, current state from region state map ={093d3ef494905701450f33a487333200 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,173 INFO [AM.ZK.Worker-pool-2-thread-2] master.RegionStates(265): Transitioned from {093d3ef494905701450f33a487333200 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {093d3ef494905701450f33a487333200 state=OPENING, ts=1373994871173, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,173 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 55d7e62280245f719c8f2cc61c586c64 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,179 INFO [PostOpenDeployTasks:2fd443c241020be67cc0d08d473f5134] catalog.MetaEditor(432): Updated row test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,179 INFO [PostOpenDeployTasks:2fd443c241020be67cc0d08d473f5134] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:14:31,179 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 2fd443c241020be67cc0d08d473f5134 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,191 INFO [StoreOpener-23b3aa990a7ac4e12882f9d3eca30eea-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,193 INFO [StoreOpener-093d3ef494905701450f33a487333200-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,193 INFO [StoreOpener-23b3aa990a7ac4e12882f9d3eca30eea-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,195 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 2fd443c241020be67cc0d08d473f5134 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,195 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 2fd443c241020be67cc0d08d473f5134, NAME => 'test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.', STARTKEY => 'hhh', ENDKEY => 'iii'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,195 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,195 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 8316cb643e8db1f47659c2704a5d85bd from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,194 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:31,197 DEBUG [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=2fd443c241020be67cc0d08d473f5134, current state from region state map ={2fd443c241020be67cc0d08d473f5134 state=OPENING, ts=1373994870764, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,197 INFO [AM.ZK.Worker-pool-2-thread-17] master.RegionStates(265): Transitioned from {2fd443c241020be67cc0d08d473f5134 state=OPENING, ts=1373994870764, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {2fd443c241020be67cc0d08d473f5134 state=OPEN, ts=1373994871197, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,198 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for 2fd443c241020be67cc0d08d473f5134 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,198 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 2fd443c241020be67cc0d08d473f5134 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,201 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(629): Onlined 23b3aa990a7ac4e12882f9d3eca30eea; next sequenceid=1 2013-07-16 17:14:31,201 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:31,202 INFO [StoreOpener-093d3ef494905701450f33a487333200-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,203 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/2fd443c241020be67cc0d08d473f5134 2013-07-16 17:14:31,203 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager$4(1218): The znode of test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 
has been deleted, region state: {2fd443c241020be67cc0d08d473f5134 state=OPEN, ts=1373994871197, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,203 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,203 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(301): Onlined 2fd443c241020be67cc0d08d473f5134 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,204 INFO [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager$4(1223): The master has opened test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,204 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 8316cb643e8db1f47659c2704a5d85bd from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,204 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => 8316cb643e8db1f47659c2704a5d85bd, NAME => 'test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.', STARTKEY => 'ppp', ENDKEY => 'qqq'} 2013-07-16 17:14:31,204 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:31,205 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:31,205 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 2fd443c241020be67cc0d08d473f5134 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,206 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:31,208 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=8316cb643e8db1f47659c2704a5d85bd, current state from region state map ={8316cb643e8db1f47659c2704a5d85bd state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,208 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(265): Transitioned from {8316cb643e8db1f47659c2704a5d85bd state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {8316cb643e8db1f47659c2704a5d85bd state=OPENING, ts=1373994871208, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,224 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined 093d3ef494905701450f33a487333200; next sequenceid=1 2013-07-16 17:14:31,224 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 093d3ef494905701450f33a487333200 2013-07-16 17:14:31,226 INFO [PostOpenDeployTasks:23b3aa990a7ac4e12882f9d3eca30eea] regionserver.HRegionServer(1703): Post open deploy tasks for 
region=test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:14:31,232 INFO [StoreOpener-8316cb643e8db1f47659c2704a5d85bd-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,235 INFO [StoreOpener-8316cb643e8db1f47659c2704a5d85bd-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,241 INFO [PostOpenDeployTasks:093d3ef494905701450f33a487333200] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:31,243 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined 8316cb643e8db1f47659c2704a5d85bd; next sequenceid=1 2013-07-16 17:14:31,244 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:31,270 INFO [PostOpenDeployTasks:8316cb643e8db1f47659c2704a5d85bd] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 2013-07-16 17:14:31,276 INFO [PostOpenDeployTasks:093d3ef494905701450f33a487333200] catalog.MetaEditor(432): Updated row test,nnn,1373994853026.093d3ef494905701450f33a487333200. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,276 INFO [PostOpenDeployTasks:093d3ef494905701450f33a487333200] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,nnn,1373994853026.093d3ef494905701450f33a487333200. 2013-07-16 17:14:31,276 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 093d3ef494905701450f33a487333200 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,277 INFO [PostOpenDeployTasks:8316cb643e8db1f47659c2704a5d85bd] catalog.MetaEditor(432): Updated row test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,277 INFO [PostOpenDeployTasks:8316cb643e8db1f47659c2704a5d85bd] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 
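The repeated compactions.CompactionConfiguration lines in this stretch print the effective compaction tuning for each store as it opens. As a readability aid, the sketch below maps the logged numbers to the HBase configuration keys they most likely correspond to in 0.95-era code; the keys are standard, but the mapping is an editorial assumption, and the test is not known to override any of them.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch only: the logged compaction values expressed as their likely config keys.
    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 134217728L);            // "size [134217728, ..."
        conf.setLong("hbase.hstore.compaction.max.size", Long.MAX_VALUE);        // "... 9223372036854775807)"
        conf.setInt("hbase.hstore.compaction.min", 3);                           // "files [3, ..."
        conf.setInt("hbase.hstore.compaction.max", 10);                          // "... 10)"
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                    // "ratio 1.200000"
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);            // "off-peak ratio 5.000000"
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // "throttle point"
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);               // "major period" (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);             // "major jitter 0.500000"
        System.out.println(conf.get("hbase.hstore.compaction.ratio"));
      }
    }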
2013-07-16 17:14:31,277 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 8316cb643e8db1f47659c2704a5d85bd from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,284 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/093d3ef494905701450f33a487333200 2013-07-16 17:14:31,285 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 093d3ef494905701450f33a487333200 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,285 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 093d3ef494905701450f33a487333200, NAME => 'test,nnn,1373994853026.093d3ef494905701450f33a487333200.', STARTKEY => 'nnn', ENDKEY => 'ooo'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,285 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,nnn,1373994853026.093d3ef494905701450f33a487333200. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,285 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node c4611b71a935e3b170cd961ded7d0820 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,285 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:31,286 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 8316cb643e8db1f47659c2704a5d85bd from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,286 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 8316cb643e8db1f47659c2704a5d85bd, NAME => 'test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.', STARTKEY => 'ppp', ENDKEY => 'qqq'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,286 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,286 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 4ac8676e6af9c1c25f2f2a90ed99d3ae from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,289 DEBUG [AM.ZK.Worker-pool-2-thread-9] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=8316cb643e8db1f47659c2704a5d85bd, current state from region state map ={8316cb643e8db1f47659c2704a5d85bd state=OPENING, ts=1373994871208, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,289 INFO [AM.ZK.Worker-pool-2-thread-9] master.RegionStates(265): Transitioned from {8316cb643e8db1f47659c2704a5d85bd state=OPENING, ts=1373994871208, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {8316cb643e8db1f47659c2704a5d85bd state=OPEN, ts=1373994871289, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,289 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for 8316cb643e8db1f47659c2704a5d85bd from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,290 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 8316cb643e8db1f47659c2704a5d85bd that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,290 DEBUG [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=093d3ef494905701450f33a487333200, current state from region state map ={093d3ef494905701450f33a487333200 state=OPENING, ts=1373994871173, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,290 INFO [AM.ZK.Worker-pool-2-thread-7] master.RegionStates(265): Transitioned from {093d3ef494905701450f33a487333200 state=OPENING, ts=1373994871173, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {093d3ef494905701450f33a487333200 state=OPEN, ts=1373994871290, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,290 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for 093d3ef494905701450f33a487333200 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,290 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 093d3ef494905701450f33a487333200 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,296 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:31,297 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node c4611b71a935e3b170cd961ded7d0820 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,297 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => 
c4611b71a935e3b170cd961ded7d0820, NAME => 'test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.', STARTKEY => 'rrr', ENDKEY => 'sss'} 2013-07-16 17:14:31,298 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:31,298 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:31,298 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:31,298 DEBUG [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1218): The znode of test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. has been deleted, region state: {8316cb643e8db1f47659c2704a5d85bd state=OPEN, ts=1373994871289, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,298 INFO [AM.ZK.Worker-pool-2-thread-5] master.RegionStates(301): Onlined 8316cb643e8db1f47659c2704a5d85bd on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,299 INFO [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1223): The master has opened test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,299 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,299 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 4ac8676e6af9c1c25f2f2a90ed99d3ae from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,299 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 8316cb643e8db1f47659c2704a5d85bd in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,299 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(4192): Open {ENCODED => 4ac8676e6af9c1c25f2f2a90ed99d3ae, NAME => 'test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.', STARTKEY => 'ttt', ENDKEY => 'uuu'} 2013-07-16 17:14:31,300 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:31,300 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(534): Instantiated test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:31,301 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:31,301 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/093d3ef494905701450f33a487333200 2013-07-16 17:14:31,301 DEBUG [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager$4(1218): The znode of test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
has been deleted, region state: {093d3ef494905701450f33a487333200 state=OPEN, ts=1373994871290, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,301 INFO [AM.ZK.Worker-pool-2-thread-3] master.RegionStates(301): Onlined 093d3ef494905701450f33a487333200 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,301 INFO [AM.ZK.Worker-pool-2-thread-3] master.AssignmentManager$4(1223): The master has opened test,nnn,1373994853026.093d3ef494905701450f33a487333200. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,301 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 093d3ef494905701450f33a487333200 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,303 DEBUG [AM.ZK.Worker-pool-2-thread-12] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=c4611b71a935e3b170cd961ded7d0820, current state from region state map ={c4611b71a935e3b170cd961ded7d0820 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,303 INFO [AM.ZK.Worker-pool-2-thread-12] master.RegionStates(265): Transitioned from {c4611b71a935e3b170cd961ded7d0820 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {c4611b71a935e3b170cd961ded7d0820 state=OPENING, ts=1373994871303, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,303 DEBUG [AM.ZK.Worker-pool-2-thread-6] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=4ac8676e6af9c1c25f2f2a90ed99d3ae, current state from region state map ={4ac8676e6af9c1c25f2f2a90ed99d3ae state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,304 INFO [AM.ZK.Worker-pool-2-thread-6] master.RegionStates(265): Transitioned from {4ac8676e6af9c1c25f2f2a90ed99d3ae state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPENING, ts=1373994871304, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,313 INFO [PostOpenDeployTasks:23b3aa990a7ac4e12882f9d3eca30eea] catalog.MetaEditor(432): Updated row test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,313 INFO [PostOpenDeployTasks:23b3aa990a7ac4e12882f9d3eca30eea] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 
2013-07-16 17:14:31,314 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 23b3aa990a7ac4e12882f9d3eca30eea from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,315 INFO [StoreOpener-c4611b71a935e3b170cd961ded7d0820-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,317 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:31,318 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 23b3aa990a7ac4e12882f9d3eca30eea from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,318 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 23b3aa990a7ac4e12882f9d3eca30eea, NAME => 'test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.', STARTKEY => 'lll', ENDKEY => 'mmm'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,318 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,318 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node baee7b76d51e7196ee3121edc50bda59 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,319 DEBUG [AM.ZK.Worker-pool-2-thread-18] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=23b3aa990a7ac4e12882f9d3eca30eea, current state from region state map ={23b3aa990a7ac4e12882f9d3eca30eea state=OPENING, ts=1373994871148, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,320 INFO [AM.ZK.Worker-pool-2-thread-18] master.RegionStates(265): Transitioned from {23b3aa990a7ac4e12882f9d3eca30eea state=OPENING, ts=1373994871148, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {23b3aa990a7ac4e12882f9d3eca30eea state=OPEN, ts=1373994871319, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,320 INFO [StoreOpener-c4611b71a935e3b170cd961ded7d0820-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,320 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] handler.OpenedRegionHandler(145): Handling OPENED event for 23b3aa990a7ac4e12882f9d3eca30eea from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,320 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 23b3aa990a7ac4e12882f9d3eca30eea that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,322 DEBUG 
[pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:31,322 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node baee7b76d51e7196ee3121edc50bda59 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,322 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(4192): Open {ENCODED => baee7b76d51e7196ee3121edc50bda59, NAME => 'test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.', STARTKEY => 'www', ENDKEY => 'xxx'} 2013-07-16 17:14:31,323 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:31,323 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(534): Instantiated test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:31,326 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:31,326 DEBUG [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager$4(1218): The znode of test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. has been deleted, region state: {23b3aa990a7ac4e12882f9d3eca30eea state=OPEN, ts=1373994871319, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,326 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,326 INFO [AM.ZK.Worker-pool-2-thread-13] master.RegionStates(301): Onlined 23b3aa990a7ac4e12882f9d3eca30eea on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,327 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-0] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 23b3aa990a7ac4e12882f9d3eca30eea in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,327 INFO [AM.ZK.Worker-pool-2-thread-13] master.AssignmentManager$4(1223): The master has opened test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,327 DEBUG [AM.ZK.Worker-pool-2-thread-4] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=baee7b76d51e7196ee3121edc50bda59, current state from region state map ={baee7b76d51e7196ee3121edc50bda59 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,328 INFO [AM.ZK.Worker-pool-2-thread-4] master.RegionStates(265): Transitioned from {baee7b76d51e7196ee3121edc50bda59 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {baee7b76d51e7196ee3121edc50bda59 state=OPENING, ts=1373994871328, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:31,345 INFO [StoreOpener-4ac8676e6af9c1c25f2f2a90ed99d3ae-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,347 INFO [StoreOpener-baee7b76d51e7196ee3121edc50bda59-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,350 INFO [StoreOpener-4ac8676e6af9c1c25f2f2a90ed99d3ae-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,351 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined c4611b71a935e3b170cd961ded7d0820; next sequenceid=1 2013-07-16 17:14:31,351 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:31,352 INFO [StoreOpener-baee7b76d51e7196ee3121edc50bda59-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,358 INFO [PostOpenDeployTasks:c4611b71a935e3b170cd961ded7d0820] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 
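Most of the DEBUG/INFO chatter around here repeats the same region-assignment handshake: the region server moves the region's znode from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED, the master's AssignmentManager mirrors that as PENDING_OPEN, OPENING, then OPEN in RegionStates, and finally deletes the unassigned znode. The self-contained sketch below is not HBase code, only an illustration of that ordering, condensed so the interleaved log lines are easier to follow.

    // Illustrative only: a condensed model of the open-region handshake visible in the log.
    public class RegionOpenHandshakeSketch {
      enum ZkNodeState { M_ZK_REGION_OFFLINE, RS_ZK_REGION_OPENING, RS_ZK_REGION_OPENED, DELETED }
      enum MasterState { PENDING_OPEN, OPENING, OPEN }

      public static void main(String[] args) {
        ZkNodeState znode = ZkNodeState.M_ZK_REGION_OFFLINE;
        MasterState master = MasterState.PENDING_OPEN;

        // Region server claims the region and starts opening it.
        znode = ZkNodeState.RS_ZK_REGION_OPENING;   // "Attempting to transition node ... to RS_ZK_REGION_OPENING"
        master = MasterState.OPENING;               // master handles transition=RS_ZK_REGION_OPENING

        // Region finishes initializing; post-open deploy tasks update .META.
        znode = ZkNodeState.RS_ZK_REGION_OPENED;    // "Successfully transitioned node ... to RS_ZK_REGION_OPENED"
        master = MasterState.OPEN;                  // master handles transition=RS_ZK_REGION_OPENED

        // Master deletes the unassigned znode and marks the region online.
        znode = ZkNodeState.DELETED;                // "Successfully deleted unassigned node ..."
        System.out.println(znode + " / " + master); // DELETED / OPEN
      }
    }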
2013-07-16 17:14:31,360 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(629): Onlined 4ac8676e6af9c1c25f2f2a90ed99d3ae; next sequenceid=1 2013-07-16 17:14:31,361 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:31,369 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(629): Onlined baee7b76d51e7196ee3121edc50bda59; next sequenceid=1 2013-07-16 17:14:31,369 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:31,374 INFO [PostOpenDeployTasks:c4611b71a935e3b170cd961ded7d0820] catalog.MetaEditor(432): Updated row test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,375 INFO [PostOpenDeployTasks:c4611b71a935e3b170cd961ded7d0820] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:14:31,375 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node c4611b71a935e3b170cd961ded7d0820 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,376 INFO [PostOpenDeployTasks:4ac8676e6af9c1c25f2f2a90ed99d3ae] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:14:31,382 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:31,382 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node c4611b71a935e3b170cd961ded7d0820 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,382 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => c4611b71a935e3b170cd961ded7d0820, NAME => 'test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.', STARTKEY => 'rrr', ENDKEY => 'sss'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,382 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,383 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 6ca2c5a98917cab87c982b4bbb7e0115 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,386 DEBUG [AM.ZK.Worker-pool-2-thread-20] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=c4611b71a935e3b170cd961ded7d0820, current state from region state map ={c4611b71a935e3b170cd961ded7d0820 state=OPENING, ts=1373994871303, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,386 INFO [AM.ZK.Worker-pool-2-thread-20] master.RegionStates(265): Transitioned from {c4611b71a935e3b170cd961ded7d0820 state=OPENING, ts=1373994871303, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {c4611b71a935e3b170cd961ded7d0820 state=OPEN, ts=1373994871386, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,386 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] handler.OpenedRegionHandler(145): Handling OPENED event for c4611b71a935e3b170cd961ded7d0820 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,387 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for c4611b71a935e3b170cd961ded7d0820 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,388 INFO [PostOpenDeployTasks:baee7b76d51e7196ee3121edc50bda59] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:31,391 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:31,392 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/c4611b71a935e3b170cd961ded7d0820 2013-07-16 17:14:31,392 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,392 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-3] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region c4611b71a935e3b170cd961ded7d0820 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,392 DEBUG [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager$4(1218): The znode of test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 
has been deleted, region state: {c4611b71a935e3b170cd961ded7d0820 state=OPEN, ts=1373994871386, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,392 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 6ca2c5a98917cab87c982b4bbb7e0115 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2013-07-16 17:14:31,393 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(4192): Open {ENCODED => 6ca2c5a98917cab87c982b4bbb7e0115, NAME => 'test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.', STARTKEY => 'yyy', ENDKEY => 'zzz'} 2013-07-16 17:14:31,393 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.MetricsRegionSourceImpl(62): Creating new MetricsRegionSourceImpl for table test 6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:31,394 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(534): Instantiated test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:31,393 INFO [AM.ZK.Worker-pool-2-thread-14] master.RegionStates(301): Onlined c4611b71a935e3b170cd961ded7d0820 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,394 INFO [AM.ZK.Worker-pool-2-thread-14] master.AssignmentManager$4(1223): The master has opened test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,395 DEBUG [AM.ZK.Worker-pool-2-thread-16] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENING, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=6ca2c5a98917cab87c982b4bbb7e0115, current state from region state map ={6ca2c5a98917cab87c982b4bbb7e0115 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,395 INFO [AM.ZK.Worker-pool-2-thread-16] master.RegionStates(265): Transitioned from {6ca2c5a98917cab87c982b4bbb7e0115 state=PENDING_OPEN, ts=1373994868016, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {6ca2c5a98917cab87c982b4bbb7e0115 state=OPENING, ts=1373994871395, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,395 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:31,409 INFO [PostOpenDeployTasks:4ac8676e6af9c1c25f2f2a90ed99d3ae] catalog.MetaEditor(432): Updated row test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,409 INFO [PostOpenDeployTasks:4ac8676e6af9c1c25f2f2a90ed99d3ae] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 
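The entries around this point show the regionserver walking each region of table 'test' through the znode handshake (M_ZK_REGION_OFFLINE -> RS_ZK_REGION_OPENING -> RS_ZK_REGION_OPENED) while the master's AssignmentManager deletes the unassigned node and records the region as online. The following is a hypothetical client-side sketch of the same readiness condition, assuming the 0.95-era client API; it reuses the MiniZK client port 62127 and the table name 'test' seen in this log, while the class name, sleep interval and row counting are illustrative additions, not anything taken from the test itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class WaitAndScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "localhost");
    conf.set("hbase.zookeeper.property.clientPort", "62127"); // MiniZK port reported in this log

    HBaseAdmin admin = new HBaseAdmin(conf);
    // Block until every region of 'test' has finished the OPENING -> OPENED transition
    // logged above and is registered as online by the master.
    while (!admin.isTableAvailable("test")) {
      Thread.sleep(200);
    }

    HTable table = new HTable(conf, "test");
    ResultScanner scanner = table.getScanner(new Scan());
    try {
      int rows = 0;
      for (Result r : scanner) {
        rows++; // rows written before the failover should become visible here
      }
      System.out.println("rows visible after region assignment: " + rows);
    } finally {
      scanner.close();
      table.close();
      admin.close();
    }
  }
}

Polling isTableAvailable() is just one way to express the wait; the point is that the OPENED transitions recorded above are what eventually let a scan of 'test' proceed without the client retrying against regions that are still in transition.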
2013-07-16 17:14:31,409 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 4ac8676e6af9c1c25f2f2a90ed99d3ae from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,414 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:31,416 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:31,425 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:31,426 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 4ac8676e6af9c1c25f2f2a90ed99d3ae from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,426 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 4ac8676e6af9c1c25f2f2a90ed99d3ae, NAME => 'test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.', STARTKEY => 'ttt', ENDKEY => 'uuu'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,426 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-0] handler.OpenRegionHandler(186): Opened test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,428 DEBUG [AM.ZK.Worker-pool-2-thread-1] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=4ac8676e6af9c1c25f2f2a90ed99d3ae, current state from region state map ={4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPENING, ts=1373994871304, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,428 INFO [AM.ZK.Worker-pool-2-thread-1] master.RegionStates(265): Transitioned from {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPENING, ts=1373994871304, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPEN, ts=1373994871428, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,428 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] handler.OpenedRegionHandler(145): Handling OPENED event for 4ac8676e6af9c1c25f2f2a90ed99d3ae from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,428 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 4ac8676e6af9c1c25f2f2a90ed99d3ae that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,431 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/4ac8676e6af9c1c25f2f2a90ed99d3ae 2013-07-16 17:14:31,432 DEBUG [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager$4(1218): The znode of test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 
has been deleted, region state: {4ac8676e6af9c1c25f2f2a90ed99d3ae state=OPEN, ts=1373994871428, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,432 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,432 INFO [AM.ZK.Worker-pool-2-thread-17] master.RegionStates(301): Onlined 4ac8676e6af9c1c25f2f2a90ed99d3ae on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,432 INFO [AM.ZK.Worker-pool-2-thread-17] master.AssignmentManager$4(1223): The master has opened test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,432 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-1] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 4ac8676e6af9c1c25f2f2a90ed99d3ae in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,445 INFO [PostOpenDeployTasks:baee7b76d51e7196ee3121edc50bda59] catalog.MetaEditor(432): Updated row test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,445 INFO [PostOpenDeployTasks:baee7b76d51e7196ee3121edc50bda59] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:14:31,446 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node baee7b76d51e7196ee3121edc50bda59 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,449 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:31,449 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node baee7b76d51e7196ee3121edc50bda59 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,450 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => baee7b76d51e7196ee3121edc50bda59, NAME => 'test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.', STARTKEY => 'www', ENDKEY => 'xxx'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,450 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-1] handler.OpenRegionHandler(186): Opened test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 
on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,451 DEBUG [AM.ZK.Worker-pool-2-thread-15] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=baee7b76d51e7196ee3121edc50bda59, current state from region state map ={baee7b76d51e7196ee3121edc50bda59 state=OPENING, ts=1373994871328, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,451 INFO [AM.ZK.Worker-pool-2-thread-15] master.RegionStates(265): Transitioned from {baee7b76d51e7196ee3121edc50bda59 state=OPENING, ts=1373994871328, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {baee7b76d51e7196ee3121edc50bda59 state=OPEN, ts=1373994871451, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,451 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] handler.OpenedRegionHandler(145): Handling OPENED event for baee7b76d51e7196ee3121edc50bda59 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,451 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for baee7b76d51e7196ee3121edc50bda59 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,455 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/baee7b76d51e7196ee3121edc50bda59 2013-07-16 17:14:31,455 DEBUG [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager$4(1218): The znode of test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. has been deleted, region state: {baee7b76d51e7196ee3121edc50bda59 state=OPEN, ts=1373994871451, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,455 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,455 INFO [AM.ZK.Worker-pool-2-thread-19] master.RegionStates(301): Onlined baee7b76d51e7196ee3121edc50bda59 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,455 INFO [AM.ZK.Worker-pool-2-thread-19] master.AssignmentManager$4(1223): The master has opened test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 
that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,455 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-4] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region baee7b76d51e7196ee3121edc50bda59 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,460 INFO [StoreOpener-6ca2c5a98917cab87c982b4bbb7e0115-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,468 INFO [StoreOpener-6ca2c5a98917cab87c982b4bbb7e0115-1] compactions.CompactionConfiguration(85): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2013-07-16 17:14:31,469 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x659046f7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:31,474 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x659046f7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:31,475 INFO [RS_OPEN_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(629): Onlined 6ca2c5a98917cab87c982b4bbb7e0115; next sequenceid=1 2013-07-16 17:14:31,475 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(598): regionserver:49041-0x13fe879789b0006 Attempting to retransition the opening state of node 6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:31,477 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x659046f7-0x13fe879789b0024 connected 2013-07-16 17:14:31,485 INFO [PostOpenDeployTasks:6ca2c5a98917cab87c982b4bbb7e0115] regionserver.HRegionServer(1703): Post open deploy tasks for region=test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:14:31,497 INFO [PostOpenDeployTasks:6ca2c5a98917cab87c982b4bbb7e0115] catalog.MetaEditor(432): Updated row test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. with server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,498 INFO [PostOpenDeployTasks:6ca2c5a98917cab87c982b4bbb7e0115] regionserver.HRegionServer(1728): Done with post open deploy task for region=test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 
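The warnings and errors that follow show the replication sink on port 55133 handing a batch of 639 replicated edits to HTable.batch(), with AsyncProcess giving up after 4 attempts because the target regionserver on port 39939 is in the failed-server list; the failure surfaces in ReplicationSink.replicateEntries() as a RetriesExhaustedWithDetailsException and is reported back to the replication source. Below is a minimal, hypothetical sketch of that failure mode, assuming the 0.95-era client API; the column family, qualifier, row keys and class name are placeholders, and the retry count is set explicitly only to mirror the "Attempt #4/4" seen in the log.

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchSinkSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "localhost");
    conf.set("hbase.zookeeper.property.clientPort", "62127"); // MiniZK port reported in this log
    conf.setInt("hbase.client.retries.number", 4);            // fail fast instead of the default retry ladder

    HTable table = new HTable(conf, "test");   // table name from the log; family/qualifier below are placeholders
    List<Row> actions = new ArrayList<Row>();
    for (int i = 0; i < 639; i++) {            // 639 mirrors the size of the batch rejected in the entries below
      Put p = new Put(Bytes.toBytes("row-" + i));
      p.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
      actions.add(p);
    }

    Object[] results = new Object[actions.size()];
    try {
      table.batch(actions, results);           // same client call ReplicationSink.batch() relies on
    } catch (RetriesExhaustedWithDetailsException e) {
      // One cause per failed action; in this log they were all FailedServerException
      // because the target regionserver on port 39939 was already down.
      System.out.println("failed actions: " + e.getNumExceptions());
      if (e.getNumExceptions() > 0) {
        System.out.println("first cause:  " + e.getCause(0));
        System.out.println("first server: " + e.getHostnamePort(0));
      }
    } finally {
      table.close();
    }
  }
}

The per-action detail carried by the exception (getNumExceptions(), getCause(int), getHostnamePort(int)) corresponds to the "Failed 639 actions: FailedServerException: 639 times" summary that appears repeatedly in the stack traces below.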
2013-07-16 17:14:31,498 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(786): regionserver:49041-0x13fe879789b0006 Attempting to transition node 6ca2c5a98917cab87c982b4bbb7e0115 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,514 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/1/region-in-transition/6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:31,514 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] zookeeper.ZKAssign(862): regionserver:49041-0x13fe879789b0006 Successfully transitioned node 6ca2c5a98917cab87c982b4bbb7e0115 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2013-07-16 17:14:31,515 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: {ENCODED => 6ca2c5a98917cab87c982b4bbb7e0115, NAME => 'test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.', STARTKEY => 'yyy', ENDKEY => 'zzz'}, server: ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,515 DEBUG [RS_OPEN_REGION-ip-10-197-55-49:49041-2] handler.OpenRegionHandler(186): Opened test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. on server:ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,518 DEBUG [AM.ZK.Worker-pool-2-thread-7] master.AssignmentManager(767): Handling transition=RS_ZK_REGION_OPENED, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736, region=6ca2c5a98917cab87c982b4bbb7e0115, current state from region state map ={6ca2c5a98917cab87c982b4bbb7e0115 state=OPENING, ts=1373994871395, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,518 INFO [AM.ZK.Worker-pool-2-thread-7] master.RegionStates(265): Transitioned from {6ca2c5a98917cab87c982b4bbb7e0115 state=OPENING, ts=1373994871395, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} to {6ca2c5a98917cab87c982b4bbb7e0115 state=OPEN, ts=1373994871518, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,519 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] handler.OpenedRegionHandler(145): Handling OPENED event for 6ca2c5a98917cab87c982b4bbb7e0115 from ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; deleting unassigned node 2013-07-16 17:14:31,519 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(405): master:50904-0x13fe879789b0004 Deleting existing unassigned node for 6ca2c5a98917cab87c982b4bbb7e0115 that is in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,543 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:31,543 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0024 2013-07-16 17:14:31,549 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:31,551 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:31,554 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at 
org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:31,559 DEBUG [MASTER_OPEN_REGION-ip-10-197-55-49:50904-2] zookeeper.ZKAssign(434): master:50904-0x13fe879789b0004 Successfully deleted unassigned node for region 6ca2c5a98917cab87c982b4bbb7e0115 in expected state RS_ZK_REGION_OPENED 2013-07-16 17:14:31,559 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/region-in-transition/6ca2c5a98917cab87c982b4bbb7e0115 2013-07-16 17:14:31,560 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/region-in-transition 2013-07-16 17:14:31,560 DEBUG [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1218): The znode of test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. has been deleted, region state: {6ca2c5a98917cab87c982b4bbb7e0115 state=OPEN, ts=1373994871518, server=ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736} 2013-07-16 17:14:31,560 INFO [AM.ZK.Worker-pool-2-thread-5] master.RegionStates(301): Onlined 6ca2c5a98917cab87c982b4bbb7e0115 on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,560 INFO [AM.ZK.Worker-pool-2-thread-5] master.AssignmentManager$4(1223): The master has opened test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. that was online on ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:14:31,589 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x771d5915 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:31,601 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x771d5915 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:31,603 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x771d5915-0x13fe879789b0025 connected 2013-07-16 17:14:31,613 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:31,614 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0025 2013-07-16 17:14:31,744 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x212b0f8a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:31,744 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x212b0f8a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:31,748 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x212b0f8a-0x13fe879789b0026 connected 2013-07-16 17:14:31,760 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:31,760 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): 
Closing zookeeper sessionid=0x13fe879789b0026 2013-07-16 17:14:31,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 9281, total replicated edits: 1992 2013-07-16 17:14:31,971 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x976e8dc connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:31,979 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x976e8dc Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:31,981 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x976e8dc-0x13fe879789b0027 connected 2013-07-16 17:14:31,987 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:31,987 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0027 2013-07-16 17:14:32,297 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x56b235e6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:32,300 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x56b235e6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:32,301 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x56b235e6-0x13fe879789b0028 connected 2013-07-16 17:14:32,320 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:32,321 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0028 2013-07-16 17:14:32,327 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:32,329 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at 
org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:32,330 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:32,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:32,364 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4316d76a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:32,373 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4316d76a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:32,374 DEBUG 
[RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4316d76a-0x13fe879789b0029 connected 2013-07-16 17:14:32,391 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:32,391 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0029 2013-07-16 17:14:32,419 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:32,428 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:32,430 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:32,518 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x737f7f6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:32,518 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x737f7f6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:32,520 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x737f7f6-0x13fe879789b002a connected 2013-07-16 17:14:32,554 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:32,555 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b002a 2013-07-16 17:14:32,763 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5089d5a5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:32,764 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5089d5a5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:32,767 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5089d5a5-0x13fe879789b002b connected 2013-07-16 17:14:32,780 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:32,780 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b002b 2013-07-16 17:14:32,785 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:32,785 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:33,096 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6e63f7c7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:33,102 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6e63f7c7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:33,104 DEBUG 
[hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6e63f7c7-0x13fe879789b002c connected 2013-07-16 17:14:33,111 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:33,111 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b002c 2013-07-16 17:14:33,124 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:33,126 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:33,129 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:33,154 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6287ecac connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:33,158 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6287ecac Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:33,160 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6287ecac-0x13fe879789b002d connected 2013-07-16 17:14:33,173 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:33,173 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b002d 2013-07-16 17:14:33,280 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3abd7ff4 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:33,282 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3abd7ff4 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:33,283 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3abd7ff4-0x13fe879789b002e connected 2013-07-16 17:14:33,293 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:33,293 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b002e 2013-07-16 17:14:33,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: 
hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:33,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:33,340 ERROR [IPC Server handler 7 on 35081] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user (auth:SIMPLE) cause:java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: replica.getGenerationStamp() >= recoveryId = 1041, block=blk_4297992342878601848_1041, replica=FinalizedReplica, blk_4297992342878601848_1041, FINALIZED getNumBytes() = 794 getBytesOnDisk() = 794 getVisibleLength()= 794 getVolume() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current getBlockFile() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848 unlinked =false 2013-07-16 17:14:33,341 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$2@64a1fcba] datanode.DataNode(1894): Failed to obtain replica info for block (=BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041) from datanode (=127.0.0.1:47006) java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: replica.getGenerationStamp() >= recoveryId = 1041, block=blk_4297992342878601848_1041, replica=FinalizedReplica, blk_4297992342878601848_1041, FINALIZED getNumBytes() = 794 getBytesOnDisk() = 794 getVisibleLength()= 794 getVolume() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current getBlockFile() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848 unlinked =false at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1462) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1422) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:1801) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:2198) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:1814) at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1880) at org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:215) at org.apache.hadoop.hdfs.server.datanode.DataNode$2.run(DataNode.java:1786) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): THIS IS NOT SUPPOSED TO HAPPEN: replica.getGenerationStamp() >= recoveryId = 1041, block=blk_4297992342878601848_1041, replica=FinalizedReplica, blk_4297992342878601848_1041, FINALIZED getNumBytes() = 794 getBytesOnDisk() = 794 getVisibleLength()= 794 getVolume() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current getBlockFile() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848 unlinked =false at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1462) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1422) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:1801) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:2198) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy24.initReplicaRecovery(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:1812) ... 4 more 2013-07-16 17:14:33,342 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$2@64a1fcba] datanode.DataNode(1894): Failed to obtain replica info for block (=BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041) from datanode (=127.0.0.1:51438) java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: replica.getGenerationStamp() >= recoveryId = 1041, block=blk_4297992342878601848_1041, replica=FinalizedReplica, blk_4297992342878601848_1041, FINALIZED getNumBytes() = 794 getBytesOnDisk() = 794 getVisibleLength()= 794 getVolume() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data1/current getBlockFile() = /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data1/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848 unlinked =false at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1462) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1422) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:1801) at org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:1812) at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1880) at org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:215) at org.apache.hadoop.hdfs.server.datanode.DataNode$2.run(DataNode.java:1786) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:33,342 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$2@64a1fcba] datanode.DataNode$2(1788): recoverBlocks FAILED: RecoveringBlock{BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041; getBlockSize()=794; corrupt=false; offset=-1; locs=[127.0.0.1:47006, 127.0.0.1:51438]} java.io.IOException: All datanodes failed: block=BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041, datanodeids=[127.0.0.1:47006, 127.0.0.1:51438] at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1901) at org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:215) at org.apache.hadoop.hdfs.server.datanode.DataNode$2.run(DataNode.java:1786) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:33,433 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:33,437 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:33,440 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:33,501 INFO [hbase-repl-pool-16-thread-2] 
zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x76bb5e95 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:33,504 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x76bb5e95 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:33,505 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x76bb5e95-0x13fe879789b002f connected 2013-07-16 17:14:33,513 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:33,514 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b002f 2013-07-16 17:14:33,819 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x68758d51 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:33,822 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x68758d51 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:33,823 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x68758d51-0x13fe879789b0030 connected 2013-07-16 17:14:33,832 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:33,833 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0030 2013-07-16 17:14:33,836 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:33,838 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at 
org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:33,839 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:33,872 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x61e076f3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:33,876 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x61e076f3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:33,877 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x61e076f3-0x13fe879789b0031 connected 2013-07-16 17:14:33,894 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:33,894 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0031 2013-07-16 17:14:34,003 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process 
identifier=hconnection-0x4cca17e2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:34,005 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4cca17e2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:34,007 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4cca17e2-0x13fe879789b0032 connected 2013-07-16 17:14:34,020 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:34,020 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0032 2013-07-16 17:14:34,229 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4cd25db6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:34,234 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4cd25db6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:34,235 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4cd25db6-0x13fe879789b0033 connected 2013-07-16 17:14:34,244 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:34,244 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0033 2013-07-16 17:14:34,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:34,442 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:34,447 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:34,450 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:34,559 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1926078f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:34,559 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1926078f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:34,562 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1926078f-0x13fe879789b0034 connected 2013-07-16 17:14:34,571 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:34,572 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0034 2013-07-16 17:14:34,580 WARN [hbase-repl-pool-16-thread-1] 
client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:34,582 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:34,584 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at 
org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:34,638 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x768f310f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:34,651 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x768f310f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:34,652 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x768f310f-0x13fe879789b0035 connected 2013-07-16 17:14:34,680 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:34,681 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0035 2013-07-16 17:14:34,787 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x72381dc2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:34,790 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x72381dc2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:34,791 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x72381dc2-0x13fe879789b0036 connected 2013-07-16 17:14:34,800 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:34,800 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0036 2013-07-16 17:14:34,805 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:34,805 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:35,017 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x647f84c9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:35,017 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x647f84c9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:35,019 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x647f84c9-0x13fe879789b0037 connected 2013-07-16 17:14:35,028 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:35,029 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper 
sessionid=0x13fe879789b0037 2013-07-16 17:14:35,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:35,366 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xed5ad5d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:35,369 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xed5ad5d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:35,373 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xed5ad5d-0x13fe879789b0038 connected 2013-07-16 17:14:35,389 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:35,389 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0038 2013-07-16 17:14:35,394 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:35,396 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:35,398 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at 
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:35,421 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x44df3a5b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:35,422 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x44df3a5b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:35,423 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x44df3a5b-0x13fe879789b0039 connected 2013-07-16 17:14:35,453 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:35,461 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:35,462 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0039 2013-07-16 17:14:35,471 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 0, fileLength: 0, trailerPresent: false 2013-07-16 17:14:35,474 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] 
regionserver.ReplicationSource(579): Nothing to replicate, sleeping 100 times 10 2013-07-16 17:14:35,577 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7021d740 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:35,582 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7021d740 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:35,584 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7021d740-0x13fe879789b003a connected 2013-07-16 17:14:35,591 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:35,592 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b003a 2013-07-16 17:14:35,817 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3158a9f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:35,817 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3158a9f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:35,820 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3158a9f-0x13fe879789b003b connected 2013-07-16 17:14:35,834 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:35,835 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b003b 2013-07-16 17:14:36,009 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:36,019 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_6921217736172272009_1082{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:36,020 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_6921217736172272009_1082{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:36,023 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 with entries=210, filesize=20.1 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876009 2013-07-16 17:14:36,150 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x14f8e8b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,154 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x14f8e8b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 
17:14:36,156 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x14f8e8b-0x13fe879789b003c connected 2013-07-16 17:14:36,167 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,167 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b003c 2013-07-16 17:14:36,170 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:36,173 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:36,174 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:36,185 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x14dc033a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,192 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x14dc033a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,195 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x14dc033a-0x13fe879789b003d connected 2013-07-16 17:14:36,210 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,210 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b003d 2013-07-16 17:14:36,318 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x71282b42 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,320 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x71282b42 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,322 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x71282b42-0x13fe879789b003e connected 2013-07-16 17:14:36,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:36,343 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,343 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b003e 2013-07-16 17:14:36,426 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:36,443 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] 
wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 1261 synced till here 1260 2013-07-16 17:14:36,449 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3893008240856145307_1125{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:36,450 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3893008240856145307_1125{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:36,454 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876009 with entries=216, filesize=20.7 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876427 2013-07-16 17:14:36,475 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] fs.HFileSystem$ReorderWALBlocks(327): /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 is an HLog file, so reordering blocks, last hostname will be:ip-10-197-55-49.us-west-1.compute.internal 2013-07-16 17:14:36,478 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] wal.ProtobufLogReader(118): After reading the trailer: walEditsStopOffset: 20610, fileLength: 20618, trailerPresent: true 2013-07-16 17:14:36,508 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x458dd138 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,510 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x458dd138 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,513 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x458dd138-0x13fe879789b003f connected 2013-07-16 17:14:36,537 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,538 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b003f 2013-07-16 17:14:36,569 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5d94f2e8 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,569 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5d94f2e8 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,571 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5d94f2e8-0x13fe879789b0040 connected 2013-07-16 17:14:36,587 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,587 INFO 
[hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0040 2013-07-16 17:14:36,643 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x13979163 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,646 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x13979163 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,647 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x13979163-0x13fe879789b0041 connected 2013-07-16 17:14:36,656 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,656 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0041 2013-07-16 17:14:36,846 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:36,859 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-2807142991711637664_1127{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:36,860 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-2807142991711637664_1127{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:36,863 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x373fdd1a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,868 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876427 with entries=207, filesize=19.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876847 2013-07-16 17:14:36,869 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x373fdd1a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,870 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x373fdd1a-0x13fe879789b0042 connected 2013-07-16 17:14:36,881 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,882 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0042 2013-07-16 17:14:36,884 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:36,884 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:36,899 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x73def740 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] 
regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 14281, total replicated edits: 1992 2013-07-16 17:14:36,905 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x73def740 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,906 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x73def740-0x13fe879789b0043 connected 2013-07-16 17:14:36,922 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,922 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0043 2013-07-16 17:14:36,927 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:36,929 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:36,934 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:36,947 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x75724060 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:36,959 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x75724060 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:36,962 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x75724060-0x13fe879789b0044 connected 2013-07-16 17:14:36,971 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:36,971 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0044 2013-07-16 17:14:37,084 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x23bfcec0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,087 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x23bfcec0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,089 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x23bfcec0-0x13fe879789b0045 connected 2013-07-16 17:14:37,098 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,099 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0045 2013-07-16 17:14:37,190 INFO [hbase-repl-pool-16-thread-1] 
zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x10614f3d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,193 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x10614f3d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,194 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x10614f3d-0x13fe879789b0046 connected 2013-07-16 17:14:37,203 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,204 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0046 2013-07-16 17:14:37,296 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:37,297 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:37,299 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:37,306 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x51d41964 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,308 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x51d41964 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,309 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x51d41964-0x13fe879789b0047 connected 2013-07-16 17:14:37,319 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,319 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0047 2013-07-16 17:14:37,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:37,424 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x72adb267 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,427 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x72adb267 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,428 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x72adb267-0x13fe879789b0048 connected 2013-07-16 17:14:37,430 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:37,437 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): 
Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,438 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0048 2013-07-16 17:14:37,438 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-4955981052321751881_1129{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:37,439 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-4955981052321751881_1129 size 20595 2013-07-16 17:14:37,442 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876847 with entries=210, filesize=20.1 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877430 2013-07-16 17:14:37,627 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x753b460f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,629 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x753b460f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,630 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x753b460f-0x13fe879789b0049 connected 2013-07-16 17:14:37,637 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,637 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0049 2013-07-16 17:14:37,640 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:37,642 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:37,642 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6419033d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,644 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:37,645 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6419033d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,647 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): 
hconnection-0x6419033d-0x13fe879789b004a connected 2013-07-16 17:14:37,657 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,657 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b004a 2013-07-16 17:14:37,764 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6c93230d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,766 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6c93230d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,767 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6c93230d-0x13fe879789b004b connected 2013-07-16 17:14:37,772 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:37,777 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,777 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b004b 2013-07-16 17:14:37,790 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_4033948962370213907_1131{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:37,791 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_4033948962370213907_1131 size 20889 2013-07-16 17:14:37,794 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877430 with entries=213, filesize=20.4 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877772 2013-07-16 17:14:37,965 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3876e500 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:37,973 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3876e500 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:37,974 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3876e500-0x13fe879789b004c connected 2013-07-16 17:14:37,987 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:37,987 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b004c 2013-07-16 17:14:37,992 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, 
location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:37,993 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:37,995 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x11e35aae connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,006 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at 
org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:38,015 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x11e35aae Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,016 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x11e35aae-0x13fe879789b004d connected 2013-07-16 17:14:38,027 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,027 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b004d 2013-07-16 17:14:38,134 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3b9690f6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,138 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3b9690f6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,140 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3b9690f6-0x13fe879789b004e connected 2013-07-16 17:14:38,155 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,156 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b004e 2013-07-16 17:14:38,223 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:38,230 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 2098 synced till here 2097 2013-07-16 17:14:38,234 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6669661333478793532_1133{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:38,235 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6669661333478793532_1133{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:38,238 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877772 with entries=207, filesize=19.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994878224 2013-07-16 17:14:38,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:38,334 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x649b5c7c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,337 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:38,339 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x649b5c7c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,340 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x649b5c7c-0x13fe879789b004f connected 2013-07-16 17:14:38,352 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,352 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b004f 2013-07-16 17:14:38,355 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:38,357 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:38,359 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:38,370 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x36446fd2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,376 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x36446fd2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,377 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): 
hconnection-0x36446fd2-0x13fe879789b0050 connected 2013-07-16 17:14:38,387 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,387 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0050 2013-07-16 17:14:38,501 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3b9d0e89 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,502 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3b9d0e89 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,505 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3b9d0e89-0x13fe879789b0051 connected 2013-07-16 17:14:38,525 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,525 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0051 2013-07-16 17:14:38,642 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:38,652 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 2307 synced till here 2306 2013-07-16 17:14:38,654 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5411945272781053050_1135{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:38,655 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5411945272781053050_1135{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:38,658 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994878224 with entries=209, filesize=20.1 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994878642 2013-07-16 17:14:38,659 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): 8ad63e6b6a48baaedae6985e87d53061 2013-07-16 17:14:38,689 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061., current region memstore size 115.3 K 2013-07-16 17:14:38,695 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:38,715 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x73d17d67 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,717 DEBUG [hbase-repl-pool-16-thread-3-EventThread] 
zookeeper.ZooKeeperWatcher(307): hconnection-0x73d17d67 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,718 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x73d17d67-0x13fe879789b0052 connected 2013-07-16 17:14:38,737 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,737 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0052 2013-07-16 17:14:38,740 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:38,742 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:38,743 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at 
org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:38,771 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2cdbf42e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,772 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2cdbf42e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,773 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2cdbf42e-0x13fe879789b0053 connected 2013-07-16 17:14:38,794 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,794 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0053 2013-07-16 17:14:38,801 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5906492748155731015_1139{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:38,805 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5906492748155731015_1139{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:38,808 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=2569, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/8ad63e6b6a48baaedae6985e87d53061/.tmp/bfd5bf8cda4e4203872232f1cb4ec417 2013-07-16 17:14:38,818 ERROR [IPC Server handler 1 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: 
Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:38,819 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at 
org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 24 more 2013-07-16 17:14:38,826 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/8ad63e6b6a48baaedae6985e87d53061/.tmp/bfd5bf8cda4e4203872232f1cb4ec417 as hdfs://localhost:43175/user/ec2-user/hbase/test/8ad63e6b6a48baaedae6985e87d53061/f/bfd5bf8cda4e4203872232f1cb4ec417 2013-07-16 17:14:38,836 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/8ad63e6b6a48baaedae6985e87d53061/f/bfd5bf8cda4e4203872232f1cb4ec417, entries=703, sequenceid=2569, filesize=21.2 K 2013-07-16 17:14:38,837 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 
in 148ms, sequenceid=2569, compaction requested=false 2013-07-16 17:14:38,903 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1f16ebd3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:38,907 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1f16ebd3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:38,908 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1f16ebd3-0x13fe879789b0054 connected 2013-07-16 17:14:38,917 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:38,917 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0054 2013-07-16 17:14:38,920 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:38,920 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:39,053 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6acbf29d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,062 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6acbf29d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,063 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6acbf29d-0x13fe879789b0055 connected 2013-07-16 17:14:39,087 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,087 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0055 2013-07-16 17:14:39,104 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:39,106 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:39,109 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:39,128 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7cab0718 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,136 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7cab0718 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,138 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7cab0718-0x13fe879789b0056 connected 2013-07-16 17:14:39,170 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 
17:14:39,171 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0056 2013-07-16 17:14:39,224 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:39,239 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 2518 synced till here 2517 2013-07-16 17:14:39,243 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-558109180225790554_1137{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:39,244 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-558109180225790554_1137{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:39,248 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994878642 with entries=211, filesize=20.2 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994879225 2013-07-16 17:14:39,249 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 237 2013-07-16 17:14:39,249 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 whose highest sequenceid is 210 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 2013-07-16 17:14:39,253 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): d88c6958af6ef781dd9834d0369f4f70 2013-07-16 17:14:39,253 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70., current region memstore size 115.3 K 2013-07-16 17:14:39,257 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:39,279 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xb35f26b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,281 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xb35f26b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,282 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xb35f26b-0x13fe879789b0057 connected 2013-07-16 17:14:39,299 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,299 INFO [hbase-repl-pool-16-thread-2] 
client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0057 2013-07-16 17:14:39,338 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:39,342 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_4031222876721630781_1143{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:39,345 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_4031222876721630781_1143 size 21685 2013-07-16 17:14:39,347 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=2781, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/d88c6958af6ef781dd9834d0369f4f70/.tmp/38fce71aef9b403099ec256eed84ca8e 2013-07-16 17:14:39,354 ERROR [IPC Server handler 2 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:39,355 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at 
java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 24 more 2013-07-16 17:14:39,360 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/d88c6958af6ef781dd9834d0369f4f70/.tmp/38fce71aef9b403099ec256eed84ca8e as hdfs://localhost:43175/user/ec2-user/hbase/test/d88c6958af6ef781dd9834d0369f4f70/f/38fce71aef9b403099ec256eed84ca8e 2013-07-16 17:14:39,369 ERROR [IPC Server handler 3 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:39,371 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
25 more 2013-07-16 17:14:39,375 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/d88c6958af6ef781dd9834d0369f4f70/f/38fce71aef9b403099ec256eed84ca8e, entries=703, sequenceid=2781, filesize=21.2 K 2013-07-16 17:14:39,375 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. in 122ms, sequenceid=2781, compaction requested=false 2013-07-16 17:14:39,477 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x34b403c7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,481 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x34b403c7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,482 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x34b403c7-0x13fe879789b0058 connected 2013-07-16 17:14:39,503 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,503 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0058 2013-07-16 17:14:39,516 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:39,517 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:39,520 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:39,528 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x327ff40e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,535 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x327ff40e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,549 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x327ff40e-0x13fe879789b0059 connected 2013-07-16 17:14:39,550 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,550 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0059 2013-07-16 17:14:39,670 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x367dd26e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,671 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x367dd26e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,685 DEBUG 
[hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x367dd26e-0x13fe879789b005a connected 2013-07-16 17:14:39,687 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:39,699 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5498990914897212607_1141{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:39,700 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,701 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b005a 2013-07-16 17:14:39,702 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5498990914897212607_1141{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:39,706 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994879225 with entries=208, filesize=19.9 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994879687 2013-07-16 17:14:39,706 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 470 2013-07-16 17:14:39,706 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 whose highest sequenceid is 420 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994857501 2013-07-16 17:14:39,709 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): b9cbc55dd9bcb588274e2598633563b2 2013-07-16 17:14:39,710 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2., current region memstore size 115.3 K 2013-07-16 17:14:39,713 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:39,754 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_8173506954272787615_1147{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:39,755 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_8173506954272787615_1147{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:39,771 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=2990, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/b9cbc55dd9bcb588274e2598633563b2/.tmp/90eba978d4a14cb0ad4f05821e5ba903 2013-07-16 17:14:39,781 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/b9cbc55dd9bcb588274e2598633563b2/.tmp/90eba978d4a14cb0ad4f05821e5ba903 as hdfs://localhost:43175/user/ec2-user/hbase/test/b9cbc55dd9bcb588274e2598633563b2/f/90eba978d4a14cb0ad4f05821e5ba903 2013-07-16 17:14:39,789 ERROR [IPC Server handler 4 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:39,791 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 25 more 2013-07-16 17:14:39,792 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/b9cbc55dd9bcb588274e2598633563b2/f/90eba978d4a14cb0ad4f05821e5ba903, entries=703, sequenceid=2990, filesize=21.2 K 2013-07-16 17:14:39,793 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2. 
in 83ms, sequenceid=2990, compaction requested=false 2013-07-16 17:14:39,813 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x285532ca connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,819 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x285532ca Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,820 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x285532ca-0x13fe879789b005b connected 2013-07-16 17:14:39,831 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,831 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b005b 2013-07-16 17:14:39,835 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:39,837 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:39,839 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at 
org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:39,877 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x50f73eb3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,878 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x50f73eb3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,879 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x50f73eb3-0x13fe879789b005c connected 2013-07-16 17:14:39,894 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,894 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b005c 2013-07-16 17:14:39,914 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4e24124f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:39,917 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4e24124f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:39,918 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4e24124f-0x13fe879789b005d connected 2013-07-16 17:14:39,941 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:39,941 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper 
sessionid=0x13fe879789b005d 2013-07-16 17:14:40,005 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x400a3bdc connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,011 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x400a3bdc Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:40,013 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x400a3bdc-0x13fe879789b005e connected 2013-07-16 17:14:40,018 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,018 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b005e 2013-07-16 17:14:40,188 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:40,200 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7000549823384652560_1145{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:40,201 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7000549823384652560_1145{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:40,206 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994879687 with entries=209, filesize=20.1 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994880188 2013-07-16 17:14:40,206 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 707 2013-07-16 17:14:40,206 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 whose highest sequenceid is 628 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994859600 2013-07-16 17:14:40,210 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): 7050f74c0058e5a7a912d72a5fd1f4fa 2013-07-16 17:14:40,212 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa., current region memstore size 115.3 K 2013-07-16 17:14:40,221 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:40,258 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x232dd375 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,267 DEBUG 
[hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x232dd375 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:40,268 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x232dd375-0x13fe879789b005f connected 2013-07-16 17:14:40,295 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,295 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b005f 2013-07-16 17:14:40,297 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-5945731231502454653_1151{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:40,301 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:40,302 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:40,306 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at 
org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:40,307 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-5945731231502454653_1151{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:40,312 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=3200, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/7050f74c0058e5a7a912d72a5fd1f4fa/.tmp/492184375b4749758634cd79283ad366 2013-07-16 17:14:40,320 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/7050f74c0058e5a7a912d72a5fd1f4fa/.tmp/492184375b4749758634cd79283ad366 as hdfs://localhost:43175/user/ec2-user/hbase/test/7050f74c0058e5a7a912d72a5fd1f4fa/f/492184375b4749758634cd79283ad366 2013-07-16 17:14:40,321 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5836e2bf connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,329 ERROR [IPC Server handler 5 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:40,347 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:40,347 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) 
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 25 more 2013-07-16 17:14:40,347 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5836e2bf Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:40,349 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/7050f74c0058e5a7a912d72a5fd1f4fa/f/492184375b4749758634cd79283ad366, entries=703, sequenceid=3200, filesize=21.2 K 2013-07-16 17:14:40,349 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 
in 137ms, sequenceid=3200, compaction requested=false 2013-07-16 17:14:40,350 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5836e2bf-0x13fe879789b0060 connected 2013-07-16 17:14:40,393 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,393 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0060 2013-07-16 17:14:40,512 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x677b5466 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,515 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x677b5466 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:40,518 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x677b5466-0x13fe879789b0061 connected 2013-07-16 17:14:40,546 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,547 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0061 2013-07-16 17:14:40,609 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6c017c6a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,618 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6c017c6a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:40,619 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6c017c6a-0x13fe879789b0062 connected 2013-07-16 17:14:40,629 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,630 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0062 2013-07-16 17:14:40,639 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:40,642 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at 
org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:40,643 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:40,652 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2dd61900 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,654 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2dd61900 Received ZooKeeper Event, type=None, 
state=SyncConnected, path=null 2013-07-16 17:14:40,662 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2dd61900-0x13fe879789b0063 connected 2013-07-16 17:14:40,713 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,713 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0063 2013-07-16 17:14:40,761 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:40,766 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1c1cc63f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,775 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 3148 synced till here 3147 2013-07-16 17:14:40,777 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1c1cc63f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:40,778 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-9208206552331157571_1149{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:40,778 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1c1cc63f-0x13fe879789b0064 connected 2013-07-16 17:14:40,779 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-9208206552331157571_1149{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:40,783 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,783 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0064 2013-07-16 17:14:40,784 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994880188 with entries=213, filesize=20.4 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994880762 2013-07-16 17:14:40,785 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 1201 2013-07-16 17:14:40,785 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 whose highest sequenceid is 836 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994860103 2013-07-16 17:14:40,794 INFO 
[RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): ba6e592748955d732d7843b9603163dc 2013-07-16 17:14:40,831 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc., current region memstore size 104.2 K 2013-07-16 17:14:40,849 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:40,865 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3b999ccc connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:40,872 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3b999ccc Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:40,874 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3b999ccc-0x13fe879789b0065 connected 2013-07-16 17:14:40,927 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:40,928 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0065 2013-07-16 17:14:40,937 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:40,938 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:40,947 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6995163845636909808_1155{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:40,949 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6995163845636909808_1155{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:40,954 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=3414, memsize=104.2 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/.tmp/a9b11ea895ac483389aa362f2dbbdca3 2013-07-16 17:14:40,961 ERROR [IPC Server handler 6 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:40,961 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
24 more 2013-07-16 17:14:40,969 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/.tmp/a9b11ea895ac483389aa362f2dbbdca3 as hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/f/a9b11ea895ac483389aa362f2dbbdca3 2013-07-16 17:14:40,984 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/ba6e592748955d732d7843b9603163dc/f/a9b11ea895ac483389aa362f2dbbdca3, entries=635, sequenceid=3414, filesize=19.3 K 2013-07-16 17:14:40,985 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~104.2 K/106680, currentsize=0/0 for region test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. in 154ms, sequenceid=3414, compaction requested=false 2013-07-16 17:14:41,106 DEBUG [Thread-595] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,163 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x18be08e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,166 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x18be08e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,167 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x18be08e-0x13fe879789b0066 connected 2013-07-16 17:14:41,173 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,174 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0066 2013-07-16 17:14:41,176 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:41,177 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:41,179 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:41,187 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x137f0ced connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,189 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x137f0ced Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,190 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x137f0ced-0x13fe879789b0067 connected 2013-07-16 17:14:41,196 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,197 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper 
sessionid=0x13fe879789b0067 2013-07-16 17:14:41,302 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:41,304 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xf8347be connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,306 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xf8347be Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,308 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xf8347be-0x13fe879789b0068 connected 2013-07-16 17:14:41,310 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_8797452235015208639_1153{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:41,311 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_8797452235015208639_1153 size 20497 2013-07-16 17:14:41,313 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994880762 with entries=209, filesize=20.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994881302 2013-07-16 17:14:41,313 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 1412 2013-07-16 17:14:41,313 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 whose highest sequenceid is 1305 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 2013-07-16 17:14:41,316 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,316 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0068 2013-07-16 17:14:41,317 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): d29efc5b487c6ba1411a330e6ea9abfc 2013-07-16 17:14:41,317 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc., current region memstore size 115.3 K 2013-07-16 17:14:41,323 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:41,337 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7855405090741658716_1159{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 
17:14:41,338 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7855405090741658716_1159{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:41,340 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=3624, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/d29efc5b487c6ba1411a330e6ea9abfc/.tmp/e617acb19794487d944c07c2fc2acd0d 2013-07-16 17:14:41,347 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/d29efc5b487c6ba1411a330e6ea9abfc/.tmp/e617acb19794487d944c07c2fc2acd0d as hdfs://localhost:43175/user/ec2-user/hbase/test/d29efc5b487c6ba1411a330e6ea9abfc/f/e617acb19794487d944c07c2fc2acd0d 2013-07-16 17:14:41,347 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:41,353 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/d29efc5b487c6ba1411a330e6ea9abfc/f/e617acb19794487d944c07c2fc2acd0d, entries=703, sequenceid=3624, filesize=21.2 K 2013-07-16 17:14:41,354 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. in 37ms, sequenceid=3624, compaction requested=false 2013-07-16 17:14:41,479 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3745dc5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,481 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3745dc5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,483 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3745dc5-0x13fe879789b0069 connected 2013-07-16 17:14:41,489 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,489 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0069 2013-07-16 17:14:41,492 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:41,494 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:41,496 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:41,503 INFO [RpcServer.handler=2,port=55133] 
zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4f7089b7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,505 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4f7089b7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,506 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4f7089b7-0x13fe879789b006a connected 2013-07-16 17:14:41,513 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,513 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b006a 2013-07-16 17:14:41,522 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x12a262c1 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,524 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x12a262c1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,525 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x12a262c1-0x13fe879789b006b connected 2013-07-16 17:14:41,531 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,531 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b006b 2013-07-16 17:14:41,591 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:41,597 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 3565 synced till here 3564 2013-07-16 17:14:41,602 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-477773094867181922_1157{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:41,603 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-477773094867181922_1157{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:41,606 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994881302 with entries=208, filesize=20.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994881592 2013-07-16 17:14:41,606 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 1648 2013-07-16 17:14:41,606 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file 
/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876009 whose highest sequenceid is 1521 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876009 2013-07-16 17:14:41,610 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): 23b3aa990a7ac4e12882f9d3eca30eea 2013-07-16 17:14:41,611 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea., current region memstore size 115.3 K 2013-07-16 17:14:41,618 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:41,619 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6f5134bd connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,622 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6f5134bd Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,623 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6f5134bd-0x13fe879789b006c connected 2013-07-16 17:14:41,652 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_5001037840057299877_1163{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:41,654 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_5001037840057299877_1163 size 21685 2013-07-16 17:14:41,654 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,655 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b006c 2013-07-16 17:14:41,656 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=3833, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/23b3aa990a7ac4e12882f9d3eca30eea/.tmp/8d62afadaf1d45ec8305dd46c74bae1f 2013-07-16 17:14:41,661 ERROR [IPC Server handler 7 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:41,661 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
24 more 2013-07-16 17:14:41,666 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/23b3aa990a7ac4e12882f9d3eca30eea/.tmp/8d62afadaf1d45ec8305dd46c74bae1f as hdfs://localhost:43175/user/ec2-user/hbase/test/23b3aa990a7ac4e12882f9d3eca30eea/f/8d62afadaf1d45ec8305dd46c74bae1f 2013-07-16 17:14:41,673 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/23b3aa990a7ac4e12882f9d3eca30eea/f/8d62afadaf1d45ec8305dd46c74bae1f, entries=703, sequenceid=3833, filesize=21.2 K 2013-07-16 17:14:41,673 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. in 62ms, sequenceid=3833, compaction requested=false 2013-07-16 17:14:41,837 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x65bb90dc connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,839 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x65bb90dc Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,840 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x65bb90dc-0x13fe879789b006d connected 2013-07-16 17:14:41,847 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,847 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b006d 2013-07-16 17:14:41,849 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:41,850 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:41,851 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:41,861 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x63cc6540 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,863 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x63cc6540 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,865 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x63cc6540-0x13fe879789b006e connected 2013-07-16 17:14:41,871 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:41,872 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b006e 2013-07-16 17:14:41,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] 
regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 19281, total replicated edits: 1992 2013-07-16 17:14:41,922 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:41,935 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 3779 synced till here 3778 2013-07-16 17:14:41,940 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-3169948411840959612_1161{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:41,941 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-3169948411840959612_1161 size 20987 2013-07-16 17:14:41,943 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994881592 with entries=214, filesize=20.5 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994881922 2013-07-16 17:14:41,944 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 1882 2013-07-16 17:14:41,944 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876427 whose highest sequenceid is 1728 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876427 2013-07-16 17:14:41,947 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): 072118ef6c0d2e55b3a9ef36a82f9fae 2013-07-16 17:14:41,948 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae., current region memstore size 115.3 K 2013-07-16 17:14:41,953 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:41,973 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_5944970552680778896_1167{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:41,974 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_5944970552680778896_1167{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:41,976 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4048, memsize=115.3 K, hasBloomFilter=true, into tmp file 
hdfs://localhost:43175/user/ec2-user/hbase/test/072118ef6c0d2e55b3a9ef36a82f9fae/.tmp/04fe583723a1492db27b98e87fabcf8e 2013-07-16 17:14:41,980 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2594aeed connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:41,991 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2594aeed Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:41,993 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2594aeed-0x13fe879789b006f connected 2013-07-16 17:14:41,996 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/072118ef6c0d2e55b3a9ef36a82f9fae/.tmp/04fe583723a1492db27b98e87fabcf8e as hdfs://localhost:43175/user/ec2-user/hbase/test/072118ef6c0d2e55b3a9ef36a82f9fae/f/04fe583723a1492db27b98e87fabcf8e 2013-07-16 17:14:42,006 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/072118ef6c0d2e55b3a9ef36a82f9fae/f/04fe583723a1492db27b98e87fabcf8e, entries=703, sequenceid=4048, filesize=21.2 K 2013-07-16 17:14:42,007 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. in 59ms, sequenceid=4048, compaction requested=false 2013-07-16 17:14:42,010 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,011 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b006f 2013-07-16 17:14:42,185 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x65ce90f5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,185 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x65ce90f5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:42,186 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x65ce90f5-0x13fe879789b0070 connected 2013-07-16 17:14:42,195 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,195 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0070 2013-07-16 17:14:42,198 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:42,199 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at 
org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:42,202 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 
2013-07-16 17:14:42,209 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3d60a70c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,212 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3d60a70c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:42,213 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3d60a70c-0x13fe879789b0071 connected 2013-07-16 17:14:42,221 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,221 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0071 2013-07-16 17:14:42,287 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:42,300 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8283039978904588517_1165{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:42,301 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-8283039978904588517_1165 size 20497 2013-07-16 17:14:42,309 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994881922 with entries=209, filesize=20.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994882288 2013-07-16 17:14:42,309 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 2117 2013-07-16 17:14:42,309 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876847 whose highest sequenceid is 1938 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994876847 2013-07-16 17:14:42,312 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): 093d3ef494905701450f33a487333200 2013-07-16 17:14:42,313 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,nnn,1373994853026.093d3ef494905701450f33a487333200., current region memstore size 115.3 K 2013-07-16 17:14:42,317 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:42,339 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1003cac6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,339 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1003cac6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 
2013-07-16 17:14:42,343 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1003cac6-0x13fe879789b0072 connected 2013-07-16 17:14:42,347 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:42,348 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_676661445840336149_1171{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:42,359 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,360 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0072 2013-07-16 17:14:42,362 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_676661445840336149_1171{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:42,366 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4258, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/093d3ef494905701450f33a487333200/.tmp/293fd034620a4632b639536b2074b792 2013-07-16 17:14:42,372 ERROR [IPC Server handler 8 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:42,372 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at 
java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 24 more 2013-07-16 17:14:42,378 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/093d3ef494905701450f33a487333200/.tmp/293fd034620a4632b639536b2074b792 as hdfs://localhost:43175/user/ec2-user/hbase/test/093d3ef494905701450f33a487333200/f/293fd034620a4632b639536b2074b792 2013-07-16 17:14:42,386 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/093d3ef494905701450f33a487333200/f/293fd034620a4632b639536b2074b792, entries=703, sequenceid=4258, filesize=21.2 K 2013-07-16 17:14:42,386 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,nnn,1373994853026.093d3ef494905701450f33a487333200. 
in 73ms, sequenceid=4258, compaction requested=false 2013-07-16 17:14:42,527 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x19c21cf9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,529 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x19c21cf9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:42,530 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x19c21cf9-0x13fe879789b0073 connected 2013-07-16 17:14:42,540 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,540 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0073 2013-07-16 17:14:42,543 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:42,543 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:42,545 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:42,551 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x45937ecf connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,553 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x45937ecf Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:42,554 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x45937ecf-0x13fe879789b0074 connected 2013-07-16 17:14:42,579 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,579 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0074 2013-07-16 17:14:42,612 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:42,626 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_6779166700190855744_1169{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 20436 2013-07-16 17:14:42,626 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_6779166700190855744_1169 size 20436 2013-07-16 17:14:42,685 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7ea9acff connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,688 DEBUG [hbase-repl-pool-16-thread-2-EventThread] 
zookeeper.ZooKeeperWatcher(307): hconnection-0x7ea9acff Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:42,689 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7ea9acff-0x13fe879789b0075 connected 2013-07-16 17:14:42,696 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,696 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0075 2013-07-16 17:14:42,886 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3c007fe5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,889 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3c007fe5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:42,890 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3c007fe5-0x13fe879789b0076 connected 2013-07-16 17:14:42,897 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,898 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0076 2013-07-16 17:14:42,901 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:42,903 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: IOException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:42,905 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] 
regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: IOException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:42,912 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4a35e00c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:42,914 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4a35e00c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:42,915 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4a35e00c-0x13fe879789b0077 connected 2013-07-16 17:14:42,922 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:42,922 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0077 2013-07-16 17:14:43,027 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6e61a2e3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,029 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6e61a2e3 Received ZooKeeper Event, 
type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,029 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994882288 with entries=208, filesize=20.0 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994882612 2013-07-16 17:14:43,030 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 1 hlogs to remove out of total 12; oldest outstanding sequenceid is 2353 2013-07-16 17:14:43,030 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877430 whose highest sequenceid is 2151 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877430 2013-07-16 17:14:43,031 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6e61a2e3-0x13fe879789b0078 connected 2013-07-16 17:14:43,034 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): c7ae28d709ff479c3e4baad82cd99ca0 2013-07-16 17:14:43,034 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0., current region memstore size 115.3 K 2013-07-16 17:14:43,038 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:43,040 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,041 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0078 2013-07-16 17:14:43,043 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:43,044 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:43,058 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_757296047781477477_1175{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:43,059 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_757296047781477477_1175{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:43,061 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4467, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/c7ae28d709ff479c3e4baad82cd99ca0/.tmp/fc70f62f766f46d9b39eeecde3250db1 2013-07-16 17:14:43,066 ERROR [IPC Server handler 9 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:43,067 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at 
java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
24 more 2013-07-16 17:14:43,072 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/c7ae28d709ff479c3e4baad82cd99ca0/.tmp/fc70f62f766f46d9b39eeecde3250db1 as hdfs://localhost:43175/user/ec2-user/hbase/test/c7ae28d709ff479c3e4baad82cd99ca0/f/fc70f62f766f46d9b39eeecde3250db1 2013-07-16 17:14:43,079 ERROR [IPC Server handler 0 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:43,079 WARN [Thread-159] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1382) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:454) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:428) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:64) at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:246) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 25 more 2013-07-16 17:14:43,082 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/c7ae28d709ff479c3e4baad82cd99ca0/f/fc70f62f766f46d9b39eeecde3250db1, entries=703, sequenceid=4467, filesize=21.2 K 2013-07-16 17:14:43,082 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 
in 48ms, sequenceid=4467, compaction requested=false 2013-07-16 17:14:43,204 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x502c186 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,210 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x502c186 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,212 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x502c186-0x13fe879789b0079 connected 2013-07-16 17:14:43,215 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,216 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0079 2013-07-16 17:14:43,218 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:43,219 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:43,220 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:43,226 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x50fc1387 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,228 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x50fc1387 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,230 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x50fc1387-0x13fe879789b007a connected 2013-07-16 17:14:43,241 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,241 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b007a 2013-07-16 17:14:43,246 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x899cd30 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,249 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x899cd30 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,250 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x899cd30-0x13fe879789b007b connected 2013-07-16 17:14:43,256 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,257 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b007b 2013-07-16 17:14:43,312 INFO [ip-10-197-55-49:49041Replication 
Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:43,315 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:43,322 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 4406 synced till here 4405 2013-07-16 17:14:43,325 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7721758757633514432_1173{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:43,325 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7721758757633514432_1173 size 20664 2013-07-16 17:14:43,327 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994882612 with entries=210, filesize=20.2 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994883315 2013-07-16 17:14:43,328 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(589): Found 2 hlogs to remove out of total 12; oldest outstanding sequenceid is 2588 2013-07-16 17:14:43,328 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877772 whose highest sequenceid is 2358 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994877772 2013-07-16 17:14:43,331 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(695): moving old hlog file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994878224 whose highest sequenceid is 2567 to /user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994878224 2013-07-16 17:14:43,346 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x30cbd684 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,348 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:43,357 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x30cbd684 Received ZooKeeper Event, type=None, state=SyncConnected, 
path=null 2013-07-16 17:14:43,359 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,359 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x30cbd684-0x13fe879789b007c connected 2013-07-16 17:14:43,359 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b007c 2013-07-16 17:14:43,562 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3a215406 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,565 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3a215406 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,566 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3a215406-0x13fe879789b007d connected 2013-07-16 17:14:43,573 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,573 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b007d 2013-07-16 17:14:43,576 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:43,576 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:43,578 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:43,585 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2a383a99 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,587 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2a383a99 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,588 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2a383a99-0x13fe879789b007e connected 2013-07-16 17:14:43,590 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(90): HLog roll requested 2013-07-16 17:14:43,595 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,596 DEBUG [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(658): cleanupCurrentWriter waiting for transactions to get synced total 4616 synced till here 4615 2013-07-16 17:14:43,596 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b007e 2013-07-16 17:14:43,598 INFO [IPC Server handler 4 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap 
updated: 127.0.0.1:39475 is added to blk_253971028549120570_1177{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:14:43,599 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_253971028549120570_1177 size 20595 2013-07-16 17:14:43,600 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(523): Rolled WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994883315 with entries=210, filesize=20.1 K; new WAL /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994883590 2013-07-16 17:14:43,601 INFO [RS:0;ip-10-197-55-49:49041.logRoller] wal.FSHLog(636): Too many hlogs: logs=11, maxlogs=10; forcing flush of 1 regions(s): 8316cb643e8db1f47659c2704a5d85bd 2013-07-16 17:14:43,602 DEBUG [Thread-159] regionserver.HRegion(1492): Started memstore flush for test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd., current region memstore size 115.3 K 2013-07-16 17:14:43,605 DEBUG [Thread-159] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:43,616 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_2707427672490228109_1181{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:43,617 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_2707427672490228109_1181{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:14:43,618 INFO [Thread-159] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4888, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/8316cb643e8db1f47659c2704a5d85bd/.tmp/57625d08fe1b4d4d81fa0332820d1970 2013-07-16 17:14:43,626 DEBUG [Thread-159] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/8316cb643e8db1f47659c2704a5d85bd/.tmp/57625d08fe1b4d4d81fa0332820d1970 as hdfs://localhost:43175/user/ec2-user/hbase/test/8316cb643e8db1f47659c2704a5d85bd/f/57625d08fe1b4d4d81fa0332820d1970 2013-07-16 17:14:43,634 INFO [Thread-159] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/8316cb643e8db1f47659c2704a5d85bd/f/57625d08fe1b4d4d81fa0332820d1970, entries=703, sequenceid=4888, filesize=21.2 K 2013-07-16 17:14:43,634 INFO [Thread-159] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd. 
in 32ms, sequenceid=4888, compaction requested=false 2013-07-16 17:14:43,700 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4efb8d7a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,702 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4efb8d7a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,703 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4efb8d7a-0x13fe879789b007f connected 2013-07-16 17:14:43,709 INFO [Thread-595] replication.TestReplicationQueueFailover(63): Done loading table 2013-07-16 17:14:43,709 INFO [Thread-595] replication.TestReplicationQueueFailover(66): Done waiting for threads 2013-07-16 17:14:43,709 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,709 WARN [Thread-595] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:72) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:43,710 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): 
Closing zookeeper sessionid=0x13fe879789b007f 2013-07-16 17:14:43,710 WARN [Thread-595] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:72) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:43,711 WARN [Thread-595] client.ServerCallable(177): Call exception, tries=0, numRetries=35, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:72) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:43,711 DEBUG [Thread-595] client.HConnectionManager$HConnectionImplementation(1097): Removed all cached region locations that map to ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 2013-07-16 17:14:43,879 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x20cff041 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,881 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x20cff041 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,882 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x20cff041-0x13fe879789b0080 connected 2013-07-16 17:14:43,888 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,889 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0080 2013-07-16 17:14:43,891 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:43,891 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:43,893 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:43,898 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2f03ac05 
connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,901 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2f03ac05 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,903 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2f03ac05-0x13fe879789b0081 connected 2013-07-16 17:14:43,909 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,909 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0081 2013-07-16 17:14:43,917 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x29663452 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:43,919 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x29663452 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:43,920 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x29663452-0x13fe879789b0082 connected 2013-07-16 17:14:43,929 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:43,929 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0082 2013-07-16 17:14:43,941 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 64c33257daeacd0fe5bf6a175319eadb, NAME => 'test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.', STARTKEY => '', ENDKEY => 'bbb'} 2013-07-16 17:14:43,941 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'bbb' 2013-07-16 17:14:43,958 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 8ad63e6b6a48baaedae6985e87d53061, NAME => 'test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061.', STARTKEY => 'bbb', ENDKEY => 'ccc'} 2013-07-16 17:14:43,958 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'ccc' 2013-07-16 17:14:43,984 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => f4cfa4d251af617b31eb11c76cc68678, NAME => 'test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.', STARTKEY => 'ccc', ENDKEY => 'ddd'} 2013-07-16 17:14:43,984 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'ddd' 2013-07-16 17:14:44,002 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => d88c6958af6ef781dd9834d0369f4f70, NAME => 'test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70.', STARTKEY => 'ddd', ENDKEY => 'eee'} 2013-07-16 17:14:44,003 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'eee' 2013-07-16 17:14:44,025 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x35c21f94 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,034 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x35c21f94 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,036 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): 
hconnection-0x35c21f94-0x13fe879789b0083 connected 2013-07-16 17:14:44,043 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => b9cbc55dd9bcb588274e2598633563b2, NAME => 'test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.', STARTKEY => 'eee', ENDKEY => 'fff'} 2013-07-16 17:14:44,043 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'fff' 2013-07-16 17:14:44,044 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,047 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0083 2013-07-16 17:14:44,070 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 7050f74c0058e5a7a912d72a5fd1f4fa, NAME => 'test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa.', STARTKEY => 'fff', ENDKEY => 'ggg'} 2013-07-16 17:14:44,070 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'ggg' 2013-07-16 17:14:44,090 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => d3ed59de1135ee985829ee3cbad0cee2, NAME => 'test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.', STARTKEY => 'ggg', ENDKEY => 'hhh'} 2013-07-16 17:14:44,091 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'hhh' 2013-07-16 17:14:44,113 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 2fd443c241020be67cc0d08d473f5134, NAME => 'test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.', STARTKEY => 'hhh', ENDKEY => 'iii'} 2013-07-16 17:14:44,113 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'iii' 2013-07-16 17:14:44,145 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 55d7e62280245f719c8f2cc61c586c64, NAME => 'test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.', STARTKEY => 'iii', ENDKEY => 'jjj'} 2013-07-16 17:14:44,145 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'jjj' 2013-07-16 17:14:44,165 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => ba6e592748955d732d7843b9603163dc, NAME => 'test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.', STARTKEY => 'jjj', ENDKEY => 'kkk'} 2013-07-16 17:14:44,166 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'kkk' 2013-07-16 17:14:44,189 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => d29efc5b487c6ba1411a330e6ea9abfc, NAME => 'test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.', STARTKEY => 'kkk', ENDKEY => 'lll'} 2013-07-16 17:14:44,189 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'lll' 2013-07-16 17:14:44,207 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 23b3aa990a7ac4e12882f9d3eca30eea, NAME => 'test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.', STARTKEY => 'lll', ENDKEY => 'mmm'} 2013-07-16 17:14:44,207 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'mmm' 2013-07-16 17:14:44,223 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 072118ef6c0d2e55b3a9ef36a82f9fae, NAME => 'test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae.', STARTKEY => 'mmm', ENDKEY => 'nnn'} 2013-07-16 17:14:44,223 DEBUG [Thread-595] 
client.ClientScanner(212): Advancing internal scanner to startKey at 'nnn' 2013-07-16 17:14:44,239 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x10c5aa0c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,240 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 093d3ef494905701450f33a487333200, NAME => 'test,nnn,1373994853026.093d3ef494905701450f33a487333200.', STARTKEY => 'nnn', ENDKEY => 'ooo'} 2013-07-16 17:14:44,240 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'ooo' 2013-07-16 17:14:44,255 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x10c5aa0c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,256 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x10c5aa0c-0x13fe879789b0084 connected 2013-07-16 17:14:44,272 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => c7ae28d709ff479c3e4baad82cd99ca0, NAME => 'test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0.', STARTKEY => 'ooo', ENDKEY => 'ppp'} 2013-07-16 17:14:44,272 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'ppp' 2013-07-16 17:14:44,277 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,277 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0084 2013-07-16 17:14:44,283 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:44,285 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at 
org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:44,288 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:44,292 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 8316cb643e8db1f47659c2704a5d85bd, NAME => 'test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.', STARTKEY => 'ppp', ENDKEY => 'qqq'} 2013-07-16 17:14:44,292 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'qqq' 2013-07-16 17:14:44,311 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x68c34b0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,312 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 930e643b6dd6efc74f14deb95249db91, NAME => 'test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91.', STARTKEY => 'qqq', ENDKEY => 'rrr'} 2013-07-16 17:14:44,312 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'rrr' 2013-07-16 17:14:44,317 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): 
hconnection-0x68c34b0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,320 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x68c34b0-0x13fe879789b0085 connected 2013-07-16 17:14:44,325 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => c4611b71a935e3b170cd961ded7d0820, NAME => 'test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.', STARTKEY => 'rrr', ENDKEY => 'sss'} 2013-07-16 17:14:44,325 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'sss' 2013-07-16 17:14:44,337 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 287928895932801d51170fb202253eac, NAME => 'test,sss,1373994853027.287928895932801d51170fb202253eac.', STARTKEY => 'sss', ENDKEY => 'ttt'} 2013-07-16 17:14:44,337 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'ttt' 2013-07-16 17:14:44,338 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,339 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0085 2013-07-16 17:14:44,348 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:44,348 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 4ac8676e6af9c1c25f2f2a90ed99d3ae, NAME => 'test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.', STARTKEY => 'ttt', ENDKEY => 'uuu'} 2013-07-16 17:14:44,348 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'uuu' 2013-07-16 17:14:44,358 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => f8146b196ac3399ee0b4bd5a227bd634, NAME => 'test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634.', STARTKEY => 'uuu', ENDKEY => 'vvv'} 2013-07-16 17:14:44,358 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'vvv' 2013-07-16 17:14:44,368 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 253df35786418e184ed944fb4881aa4b, NAME => 'test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b.', STARTKEY => 'vvv', ENDKEY => 'www'} 2013-07-16 17:14:44,368 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'www' 2013-07-16 17:14:44,376 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => baee7b76d51e7196ee3121edc50bda59, NAME => 'test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.', STARTKEY => 'www', ENDKEY => 'xxx'} 2013-07-16 17:14:44,376 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'xxx' 2013-07-16 17:14:44,386 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 7dde26b51ab247338eaa8d5e372498e9, NAME => 'test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9.', STARTKEY => 'xxx', ENDKEY => 'yyy'} 2013-07-16 17:14:44,387 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'yyy' 2013-07-16 17:14:44,395 DEBUG [Thread-595] client.ClientScanner(204): Finished with region {ENCODED => 6ca2c5a98917cab87c982b4bbb7e0115, NAME => 'test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.', STARTKEY => 'yyy', ENDKEY => 'zzz'} 
2013-07-16 17:14:44,395 DEBUG [Thread-595] client.ClientScanner(212): Advancing internal scanner to startKey at 'zzz' 2013-07-16 17:14:44,397 DEBUG [Thread-595] client.ClientScanner(198): Finished region={ENCODED => 38600084dc094d719e5c6033fca5452b, NAME => 'test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.', STARTKEY => 'zzz', ENDKEY => ''} 2013-07-16 17:14:44,399 WARN [Thread-595] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:44,399 WARN [Thread-595] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:44,400 WARN [Thread-595] client.ServerCallable(177): Call exception, tries=0, numRetries=6, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:44,400 DEBUG [Thread-595] client.HConnectionManager$HConnectionImplementation(1097): Removed all cached region locations that map to ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:44,446 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x15ca7a5b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,449 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x15ca7a5b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,450 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x15ca7a5b-0x13fe879789b0086 connected 2013-07-16 17:14:44,457 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,457 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0086 2013-07-16 17:14:44,584 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6088d0fe connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,586 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6088d0fe Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,588 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6088d0fe-0x13fe879789b0087 connected 2013-07-16 17:14:44,593 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,594 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0087 2013-07-16 17:14:44,596 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:44,597 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at 
org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:44,599 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:44,604 WARN [Thread-595] 
client.ServerCallable(177): Call exception, tries=1, numRetries=6, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:44,604 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x273eb73c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,606 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x273eb73c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,607 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x273eb73c-0x13fe879789b0088 connected 2013-07-16 17:14:44,613 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,614 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0088 2013-07-16 17:14:44,662 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x31450e67 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,664 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x31450e67 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,665 DEBUG [hbase-repl-pool-16-thread-2-EventThread]
zookeeper.ZooKeeperWatcher(384): hconnection-0x31450e67-0x13fe879789b0089 connected 2013-07-16 17:14:44,671 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,671 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0089 2013-07-16 17:14:44,718 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xf2f8f5f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,720 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xf2f8f5f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,721 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xf2f8f5f-0x13fe879789b008a connected 2013-07-16 17:14:44,727 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,727 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b008a 2013-07-16 17:14:44,908 DEBUG [Thread-595] client.HConnectionManager$HConnectionImplementation(1069): Removed ip-10-197-55-49.us-west-1.compute.internal:39939 as a location of test,,1373994855276.f3fce37071716f89a509124ef3fd1288. for tableName=test from cache 2013-07-16 17:14:44,910 WARN [Thread-595] client.ServerCallable(177): Call exception, tries=2, numRetries=6, retryTime=-512ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:44,932 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x46cf828e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,934 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x46cf828e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,935 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x46cf828e-0x13fe879789b008b connected 2013-07-16 17:14:44,941 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,941 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b008b 2013-07-16 17:14:44,977 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6b3e773f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:44,979 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6b3e773f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:44,980 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6b3e773f-0x13fe879789b008c connected 2013-07-16 17:14:44,986 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:44,986 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b008c 2013-07-16 17:14:44,990 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:44,990 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at 
org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:44,992 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:44,998 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x55dbc59b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,000 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x55dbc59b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,001 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x55dbc59b-0x13fe879789b008d connected 2013-07-16 17:14:45,007 DEBUG [RpcServer.handler=4,port=55133] 
client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,008 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b008d 2013-07-16 17:14:45,113 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1a3843d4 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,120 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1a3843d4 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,121 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1a3843d4-0x13fe879789b008e connected 2013-07-16 17:14:45,128 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,128 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b008e 2013-07-16 17:14:45,132 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:45,132 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:45,248 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7a0f8249 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,251 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7a0f8249 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,252 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7a0f8249-0x13fe879789b008f connected 2013-07-16 17:14:45,259 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,259 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b008f 2013-07-16 17:14:45,262 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:45,262 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:45,264 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:45,269 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5e93fc0d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,271 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5e93fc0d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,272 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5e93fc0d-0x13fe879789b0090 connected 2013-07-16 17:14:45,278 DEBUG 
[RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,279 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0090 2013-07-16 17:14:45,334 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x556e3764 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,337 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x556e3764 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,338 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x556e3764-0x13fe879789b0091 connected 2013-07-16 17:14:45,345 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,346 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0091 2013-07-16 17:14:45,348 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:45,384 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1739e7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,387 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1739e7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,388 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1739e7-0x13fe879789b0092 connected 2013-07-16 17:14:45,395 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,395 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0092 2013-07-16 17:14:45,414 DEBUG [Thread-595] client.HConnectionManager$HConnectionImplementation(1069): Removed ip-10-197-55-49.us-west-1.compute.internal:39939 as a location of test,,1373994855276.f3fce37071716f89a509124ef3fd1288. 
for tableName=test from cache 2013-07-16 17:14:45,416 WARN [Thread-595] client.ServerCallable(177): Call exception, tries=3, numRetries=6, retryTime=-1018ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:39939 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:45,601 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6197fbbc connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,603 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6197fbbc Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,604 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6197fbbc-0x13fe879789b0093 connected 2013-07-16 17:14:45,610 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,611 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0093 2013-07-16 17:14:45,653 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x712175f2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,654 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x712175f2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16
17:14:45,656 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x712175f2-0x13fe879789b0094 connected 2013-07-16 17:14:45,662 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,662 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0094 2013-07-16 17:14:45,666 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:45,666 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:45,668 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:45,675 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3164ed23 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,677 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3164ed23 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,678 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3164ed23-0x13fe879789b0095 connected 2013-07-16 17:14:45,687 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,687 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0095 2013-07-16 17:14:45,868 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2b0671ba connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,869 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2b0671ba Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,871 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2b0671ba-0x13fe879789b0096 connected 2013-07-16 17:14:45,879 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,880 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0096 2013-07-16 17:14:45,918 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x63f4ad66 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,920 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x63f4ad66 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 
17:14:45,922 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x63f4ad66-0x13fe879789b0097 connected 2013-07-16 17:14:45,928 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,928 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0097 2013-07-16 17:14:45,930 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:45,931 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:45,933 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:45,939 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4457ffb2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:45,940 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4457ffb2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:45,942 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4457ffb2-0x13fe879789b0098 connected 2013-07-16 17:14:45,949 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:45,949 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0098 2013-07-16 17:14:46,054 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x30824336 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,056 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x30824336 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,057 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x30824336-0x13fe879789b0099 connected 2013-07-16 17:14:46,063 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,063 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0099 2013-07-16 17:14:46,085 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x79de7e39 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,088 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x79de7e39 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,089 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): 
hconnection-0x79de7e39-0x13fe879789b009a connected 2013-07-16 17:14:46,096 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,096 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b009a 2013-07-16 17:14:46,269 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x14e3c50c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,272 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x14e3c50c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,273 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x14e3c50c-0x13fe879789b009b connected 2013-07-16 17:14:46,279 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,279 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b009b 2013-07-16 17:14:46,348 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:46,403 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x35001fd2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,405 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x35001fd2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,406 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x35001fd2-0x13fe879789b009c connected 2013-07-16 17:14:46,411 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,412 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b009c 2013-07-16 17:14:46,414 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:46,415 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at 
org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:46,417 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:46,423 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x33d8f60 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,425 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x33d8f60 Received ZooKeeper Event, type=None, 
state=SyncConnected, path=null 2013-07-16 17:14:46,426 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x33d8f60-0x13fe879789b009d connected 2013-07-16 17:14:46,427 DEBUG [Thread-595] client.HConnectionManager$HConnectionImplementation(1069): Removed ip-10-197-55-49.us-west-1.compute.internal:39939 as a location of test,,1373994855276.f3fce37071716f89a509124ef3fd1288. for tableName=test from cache 2013-07-16 17:14:46,428 WARN [Thread-595] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:46,429 WARN [Thread-595] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:46,429 WARN [Thread-595] client.ServerCallable(177): Call exception, tries=4, numRetries=6, retryTime=-2031ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at 
org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:46,430 DEBUG [Thread-595] client.HConnectionManager$HConnectionImplementation(1097): Removed all cached region locations that map to ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:46,433 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,433 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b009d 2013-07-16 17:14:46,542 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1b7f955b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,543 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1b7f955b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,546 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1b7f955b-0x13fe879789b009e connected 2013-07-16 17:14:46,557 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,558 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b009e 2013-07-16 17:14:46,590 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1be9cc0e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,593 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1be9cc0e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,594 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1be9cc0e-0x13fe879789b009f connected 2013-07-16 17:14:46,604 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,605 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b009f 2013-07-16 17:14:46,608 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 
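
The retry warnings above ("Call exception, tries=4, numRetries=6", connection refused to 10.197.55.49:39939) come from the client-side retry loop that the test's scan goes through while the killed regionserver is still down. The following is only an illustrative Java sketch, not code from the test: it shows the standard client retry settings ("hbase.client.retries.number", "hbase.client.pause") around the same HTable.getScanner path; the table name "test" and the retry values are assumptions for the sketch.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanWithRetries {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // More retries / a longer pause give a restarted regionserver time to come
    // back before the scan fails with RetriesExhaustedException.
    conf.setInt("hbase.client.retries.number", 10);   // assumed value
    conf.setLong("hbase.client.pause", 1000);         // ms between retries, assumed
    HTable table = new HTable(conf, "test");          // table name assumed
    try {
      ResultScanner scanner = table.getScanner(new Scan());
      try {
        for (Result row : scanner) {
          System.out.println(row);
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}

Raising the retry count or pause only gives the failed-over region more time to come back before the client gives up; it does not change the underlying connection refusals seen in the log.
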
2013-07-16 17:14:46,608 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:46,613 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:46,619 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x19fef64 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,621 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x19fef64 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,622 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x19fef64-0x13fe879789b00a0 connected 2013-07-16 17:14:46,631 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,631 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a0 2013-07-16 17:14:46,736 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x689258c7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,739 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x689258c7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,741 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x689258c7-0x13fe879789b00a1 connected 2013-07-16 17:14:46,752 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,753 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a1 2013-07-16 17:14:46,764 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x8b5441b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,767 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x8b5441b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,768 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x8b5441b-0x13fe879789b00a2 connected 2013-07-16 17:14:46,774 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,775 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a2 2013-07-16 17:14:46,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 24281, total replicated edits: 1992 2013-07-16 17:14:46,958 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x256c711 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:46,961 DEBUG 
[hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x256c711 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:46,963 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x256c711-0x13fe879789b00a3 connected 2013-07-16 17:14:46,969 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:46,970 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a3 2013-07-16 17:14:47,081 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6b364c80 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,083 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6b364c80 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,084 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6b364c80-0x13fe879789b00a4 connected 2013-07-16 17:14:47,091 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,091 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a4 2013-07-16 17:14:47,094 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:47,094 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:47,095 WARN 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:47,102 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6f07524 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,104 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6f07524 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,105 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6f07524-0x13fe879789b00a5 connected 2013-07-16 17:14:47,111 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,112 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a5 2013-07-16 17:14:47,217 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4308d92c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,219 DEBUG 
[hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4308d92c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,220 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4308d92c-0x13fe879789b00a6 connected 2013-07-16 17:14:47,226 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,226 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a6 2013-07-16 17:14:47,229 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:47,229 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at 
org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:47,276 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7f4d4b79 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,279 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7f4d4b79 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,280 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7f4d4b79-0x13fe879789b00a7 connected 2013-07-16 17:14:47,289 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,289 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a7 2013-07-16 17:14:47,292 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:47,292 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 
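
The ERROR entry just above shows the replication sink's HTable.batch call giving up with RetriesExhaustedWithDetailsException ("Failed 315 actions: FailedServerException: 315 times"). As a hedged illustration of what that exception carries on the caller's side (this is not the ReplicationSink code itself), a minimal Java sketch that issues a batch and prints the per-action causes could look like the following; the table name, column family, qualifier and row key are assumptions.

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchWithDiagnostics {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "test");          // table name assumed
    List<Row> actions = new ArrayList<Row>();
    Put put = new Put(Bytes.toBytes("row1"));         // row key assumed
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    actions.add(put);
    try {
      table.batch(actions, new Object[actions.size()]);
    } catch (RetriesExhaustedWithDetailsException e) {
      // Summarise per-action failures, similar to the sink log above
      // ("Failed N actions: FailedServerException: N times").
      System.err.println("Failed actions: " + e.getNumExceptions());
      for (int i = 0; i < e.getNumExceptions(); i++) {
        System.err.println(Bytes.toString(e.getRow(i).getRow())
            + " -> " + e.getCause(i));
      }
    } finally {
      table.close();
    }
  }
}
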
2013-07-16 17:14:47,293 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:47,299 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1d5e8f0d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,301 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1d5e8f0d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,303 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1d5e8f0d-0x13fe879789b00a8 connected 2013-07-16 17:14:47,312 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,313 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a8 2013-07-16 17:14:47,348 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:47,417 INFO 
[hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7a1064df connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,419 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7a1064df Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,421 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7a1064df-0x13fe879789b00a9 connected 2013-07-16 17:14:47,427 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,428 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00a9 2013-07-16 17:14:47,433 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x457095e5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,435 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x457095e5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,436 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x457095e5-0x13fe879789b00aa connected 2013-07-16 17:14:47,442 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,442 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00aa 2013-07-16 17:14:47,633 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5fdfe483 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,635 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5fdfe483 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,636 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5fdfe483-0x13fe879789b00ab connected 2013-07-16 17:14:47,644 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,644 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ab 2013-07-16 17:14:47,749 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x260b493 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,752 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x260b493 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,753 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x260b493-0x13fe879789b00ac connected 2013-07-16 17:14:47,763 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,763 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ac 2013-07-16 17:14:47,768 WARN [hbase-repl-pool-16-thread-3] 
client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:47,769 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:47,769 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at 
org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:47,782 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x73e2d982 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,783 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x73e2d982 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,784 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x73e2d982-0x13fe879789b00ad connected 2013-07-16 17:14:47,791 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,792 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ad 2013-07-16 17:14:47,896 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4cfcc93c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,898 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4cfcc93c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,900 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4cfcc93c-0x13fe879789b00ae connected 2013-07-16 17:14:47,906 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,906 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ae 2013-07-16 17:14:47,949 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x24f5de4e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,951 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x24f5de4e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,953 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x24f5de4e-0x13fe879789b00af connected 2013-07-16 17:14:47,958 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,959 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00af 2013-07-16 17:14:47,961 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 315 
operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:47,961 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:47,962 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:47,970 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5b47f8aa connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:47,973 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5b47f8aa Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:47,974 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5b47f8aa-0x13fe879789b00b0 connected 2013-07-16 17:14:47,980 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:47,980 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b0 2013-07-16 17:14:48,085 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x64db8258 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,087 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x64db8258 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,088 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x64db8258-0x13fe879789b00b1 connected 2013-07-16 17:14:48,094 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,094 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b1 2013-07-16 17:14:48,110 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5e540696 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,112 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5e540696 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,114 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5e540696-0x13fe879789b00b2 connected 2013-07-16 17:14:48,120 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,120 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b2 2013-07-16 17:14:48,300 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1367d679 connecting to ZooKeeper ensemble=localhost:62127 
2013-07-16 17:14:48,302 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1367d679 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,303 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1367d679-0x13fe879789b00b3 connected 2013-07-16 17:14:48,310 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,311 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b3 2013-07-16 17:14:48,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:48,348 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:48,429 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x70b5cd50 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,431 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x70b5cd50 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,432 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x70b5cd50-0x13fe879789b00b4 connected 2013-07-16 17:14:48,438 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,438 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b4 2013-07-16 17:14:48,441 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:48,442 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:48,443 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:48,448 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x426613f1 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,450 
DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x426613f1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,451 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x426613f1-0x13fe879789b00b5 connected 2013-07-16 17:14:48,457 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,457 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b5 2013-07-16 17:14:48,562 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x35229eea connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,564 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x35229eea Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,565 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x35229eea-0x13fe879789b00b6 connected 2013-07-16 17:14:48,571 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,571 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b6 2013-07-16 17:14:48,615 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x878203a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,616 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x878203a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,618 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x878203a-0x13fe879789b00b7 connected 2013-07-16 17:14:48,623 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,624 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b7 2013-07-16 17:14:48,626 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:48,626 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:48,628 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:48,632 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x67296cd0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,634 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x67296cd0 Received ZooKeeper 
Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,635 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x67296cd0-0x13fe879789b00b8 connected 2013-07-16 17:14:48,641 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,641 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b8 2013-07-16 17:14:48,746 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x62606f90 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,747 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x62606f90 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,749 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x62606f90-0x13fe879789b00b9 connected 2013-07-16 17:14:48,755 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,755 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00b9 2013-07-16 17:14:48,776 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2bd28182 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,778 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2bd28182 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,779 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2bd28182-0x13fe879789b00ba connected 2013-07-16 17:14:48,785 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,785 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ba 2013-07-16 17:14:48,959 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x50f85ff6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:48,961 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x50f85ff6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:48,962 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x50f85ff6-0x13fe879789b00bb connected 2013-07-16 17:14:48,968 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:48,969 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00bb 2013-07-16 17:14:49,091 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x61285ffb connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,093 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x61285ffb Received ZooKeeper Event, type=None, 
state=SyncConnected, path=null 2013-07-16 17:14:49,094 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x61285ffb-0x13fe879789b00bc connected 2013-07-16 17:14:49,100 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,100 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00bc 2013-07-16 17:14:49,104 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:49,105 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:49,106 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:49,111 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x33d9d590 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,113 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x33d9d590 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,114 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x33d9d590-0x13fe879789b00bd connected 2013-07-16 17:14:49,120 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,120 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00bd 2013-07-16 17:14:49,225 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x267279fd connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,227 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x267279fd Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,229 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x267279fd-0x13fe879789b00be connected 2013-07-16 17:14:49,235 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,235 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00be 2013-07-16 17:14:49,238 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:49,238 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:49,273 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3815ac99 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,275 DEBUG [hbase-repl-pool-16-thread-3-EventThread] 
zookeeper.ZooKeeperWatcher(307): hconnection-0x3815ac99 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,276 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3815ac99-0x13fe879789b00bf connected 2013-07-16 17:14:49,282 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,282 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00bf 2013-07-16 17:14:49,284 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:49,285 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:49,286 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at 
org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:49,290 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5eaa596d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,292 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5eaa596d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,293 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5eaa596d-0x13fe879789b00c0 connected 2013-07-16 17:14:49,299 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,300 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c0 2013-07-16 17:14:49,349 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:49,404 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3d2e7406 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,406 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3d2e7406 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,407 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3d2e7406-0x13fe879789b00c1 connected 2013-07-16 17:14:49,414 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,415 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c1 2013-07-16 17:14:49,441 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2e8b3bdd 
connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,443 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2e8b3bdd Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,445 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2e8b3bdd-0x13fe879789b00c2 connected 2013-07-16 17:14:49,450 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,450 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c2 2013-07-16 17:14:49,620 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x53f88a1 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,622 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x53f88a1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,623 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x53f88a1-0x13fe879789b00c3 connected 2013-07-16 17:14:49,630 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,630 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c3 2013-07-16 17:14:49,757 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x446728c7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,759 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x446728c7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,760 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x446728c7-0x13fe879789b00c4 connected 2013-07-16 17:14:49,766 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,766 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c4 2013-07-16 17:14:49,769 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:49,770 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:49,771 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:49,776 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6c74de6e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,778 
DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6c74de6e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,779 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6c74de6e-0x13fe879789b00c5 connected 2013-07-16 17:14:49,785 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,785 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c5 2013-07-16 17:14:49,890 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x154b66ce connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,892 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x154b66ce Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,893 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x154b66ce-0x13fe879789b00c6 connected 2013-07-16 17:14:49,900 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,900 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c6 2013-07-16 17:14:49,934 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4f964d62 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,936 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4f964d62 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,937 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4f964d62-0x13fe879789b00c7 connected 2013-07-16 17:14:49,944 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,945 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c7 2013-07-16 17:14:49,947 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:49,947 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:49,948 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:49,952 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x335dcd0c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:49,955 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x335dcd0c Received ZooKeeper 
Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:49,956 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x335dcd0c-0x13fe879789b00c8 connected 2013-07-16 17:14:49,961 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:49,962 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c8 2013-07-16 17:14:50,066 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x34d28e76 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,069 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x34d28e76 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,070 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x34d28e76-0x13fe879789b00c9 connected 2013-07-16 17:14:50,076 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,076 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00c9 2013-07-16 17:14:50,105 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5feed5f2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,106 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5feed5f2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,108 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5feed5f2-0x13fe879789b00ca connected 2013-07-16 17:14:50,113 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,113 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ca 2013-07-16 17:14:50,280 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6337bb9c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,282 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6337bb9c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,283 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6337bb9c-0x13fe879789b00cb connected 2013-07-16 17:14:50,289 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,289 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00cb 2013-07-16 17:14:50,349 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:50,419 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6e55f58 connecting to 
ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,422 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6e55f58 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,423 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6e55f58-0x13fe879789b00cc connected 2013-07-16 17:14:50,429 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,429 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00cc 2013-07-16 17:14:50,432 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:50,432 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:50,433 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:50,438 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xbea35db connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,440 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xbea35db Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,441 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xbea35db-0x13fe879789b00cd connected 2013-07-16 17:14:50,449 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,450 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00cd 2013-07-16 17:14:50,555 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x654029a9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,557 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x654029a9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,558 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x654029a9-0x13fe879789b00ce connected 2013-07-16 17:14:50,571 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,572 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ce 2013-07-16 17:14:50,595 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7f7951f3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 
17:14:50,597 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7f7951f3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,598 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7f7951f3-0x13fe879789b00cf connected 2013-07-16 17:14:50,603 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,604 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00cf 2013-07-16 17:14:50,606 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:50,606 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:50,607 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:50,611 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7740c621 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,613 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7740c621 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,614 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7740c621-0x13fe879789b00d0 connected 2013-07-16 17:14:50,620 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,620 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d0 2013-07-16 17:14:50,725 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7a79e5e7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,728 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7a79e5e7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,730 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7a79e5e7-0x13fe879789b00d1 connected 2013-07-16 17:14:50,739 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,739 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d1 2013-07-16 17:14:50,779 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7ee28774 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,783 DEBUG [hbase-repl-pool-16-thread-3-EventThread] 
zookeeper.ZooKeeperWatcher(307): hconnection-0x7ee28774 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,785 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7ee28774-0x13fe879789b00d2 connected 2013-07-16 17:14:50,794 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,795 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d2 2013-07-16 17:14:50,944 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x31c73ad5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:50,949 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x31c73ad5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:50,951 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x31c73ad5-0x13fe879789b00d3 connected 2013-07-16 17:14:50,959 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:50,960 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d3 2013-07-16 17:14:51,102 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5b1e31c0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,104 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5b1e31c0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,105 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5b1e31c0-0x13fe879789b00d4 connected 2013-07-16 17:14:51,111 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,112 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d4 2013-07-16 17:14:51,114 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:51,115 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:51,116 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:51,121 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6b61c3a0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,123 DEBUG [RpcServer.handler=1,port=55133-EventThread] 
zookeeper.ZooKeeperWatcher(307): hconnection-0x6b61c3a0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,124 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6b61c3a0-0x13fe879789b00d5 connected 2013-07-16 17:14:51,131 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,131 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d5 2013-07-16 17:14:51,236 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6eedf759 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,238 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6eedf759 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,240 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6eedf759-0x13fe879789b00d6 connected 2013-07-16 17:14:51,246 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,246 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d6 2013-07-16 17:14:51,249 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:51,249 WARN [hbase-repl-pool-16-thread-2] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) 
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:51,267 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6a835dc6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,269 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6a835dc6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,271 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6a835dc6-0x13fe879789b00d7 connected 2013-07-16 17:14:51,282 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,283 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d7 2013-07-16 17:14:51,286 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:51,286 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at 
org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:51,287 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:51,297 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2025e45d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,299 DEBUG [RpcServer.handler=0,port=55133-EventThread] 
zookeeper.ZooKeeperWatcher(307): hconnection-0x2025e45d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,301 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2025e45d-0x13fe879789b00d8 connected 2013-07-16 17:14:51,313 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,313 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d8 2013-07-16 17:14:51,349 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:51,419 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x456f2c51 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,421 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x456f2c51 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,422 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x456f2c51-0x13fe879789b00d9 connected 2013-07-16 17:14:51,429 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,431 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00d9 2013-07-16 17:14:51,453 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7ecc938c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,456 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7ecc938c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,457 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7ecc938c-0x13fe879789b00da connected 2013-07-16 17:14:51,465 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,466 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00da 2013-07-16 17:14:51,637 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6d3666fb connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,642 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6d3666fb Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,643 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6d3666fb-0x13fe879789b00db connected 2013-07-16 17:14:51,658 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,658 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00db 2013-07-16 17:14:51,772 INFO [hbase-repl-pool-16-thread-1] 
zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4f621103 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,777 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4f621103 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,778 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4f621103-0x13fe879789b00dc connected 2013-07-16 17:14:51,790 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,790 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00dc 2013-07-16 17:14:51,793 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:51,794 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:51,795 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:51,802 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x134b5a7b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,806 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x134b5a7b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,807 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x134b5a7b-0x13fe879789b00dd connected 2013-07-16 17:14:51,818 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,818 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00dd 2013-07-16 17:14:51,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 29281, total replicated edits: 1992 2013-07-16 17:14:51,934 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5b0aeba9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,936 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5b0aeba9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,937 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5b0aeba9-0x13fe879789b00de connected 2013-07-16 17:14:51,946 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 
17:14:51,946 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00de 2013-07-16 17:14:51,975 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6a4bf202 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:51,977 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6a4bf202 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:51,978 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6a4bf202-0x13fe879789b00df connected 2013-07-16 17:14:51,989 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:51,989 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00df 2013-07-16 17:14:51,992 WARN [hbase-repl-pool-16-thread-3] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:51,994 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:51,995 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at 
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:52,002 INFO [RpcServer.handler=1,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x50f9ccb8 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,004 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x50f9ccb8 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,005 DEBUG [RpcServer.handler=1,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x50f9ccb8-0x13fe879789b00e0 connected 2013-07-16 17:14:52,015 DEBUG [RpcServer.handler=1,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,016 INFO [RpcServer.handler=1,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e0 2013-07-16 17:14:52,129 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1c56295f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,135 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1c56295f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,138 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1c56295f-0x13fe879789b00e1 connected 2013-07-16 17:14:52,146 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,146 INFO 
[hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e1 2013-07-16 17:14:52,152 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7fa52d2f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,155 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7fa52d2f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,157 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7fa52d2f-0x13fe879789b00e2 connected 2013-07-16 17:14:52,169 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,169 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e2 2013-07-16 17:14:52,350 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:52,353 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x41dd9340 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,355 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x41dd9340 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,357 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x41dd9340-0x13fe879789b00e3 connected 2013-07-16 17:14:52,363 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,364 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e3 2013-07-16 17:14:52,485 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1368d2e7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,486 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1368d2e7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,488 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1368d2e7-0x13fe879789b00e4 connected 2013-07-16 17:14:52,508 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,508 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e4 2013-07-16 17:14:52,512 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:52,512 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:52,513 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at 
org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:52,520 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x446b852 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,528 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x446b852 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,531 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x446b852-0x13fe879789b00e5 connected 2013-07-16 17:14:52,546 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,547 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e5 2013-07-16 17:14:52,656 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x453bb109 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,658 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x453bb109 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,660 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x453bb109-0x13fe879789b00e6 connected 2013-07-16 17:14:52,675 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,676 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e6 2013-07-16 17:14:52,691 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3cb058e6 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,691 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3cb058e6 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,693 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3cb058e6-0x13fe879789b00e7 connected 2013-07-16 17:14:52,706 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,707 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e7 2013-07-16 17:14:52,716 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:52,717 ERROR [RpcServer.handler=1,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:52,718 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:52,752 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4470e15b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,753 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4470e15b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,754 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4470e15b-0x13fe879789b00e8 connected 2013-07-16 17:14:52,769 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,769 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e8 2013-07-16 17:14:52,876 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x649a727b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:52,879 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x649a727b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:52,881 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x649a727b-0x13fe879789b00e9 connected 2013-07-16 17:14:52,908 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:52,908 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00e9 2013-07-16 17:14:53,121 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4b2555ab connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,122 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4b2555ab Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,126 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4b2555ab-0x13fe879789b00ea connected 2013-07-16 17:14:53,139 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:53,139 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ea 2013-07-16 17:14:53,219 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x36721689 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,221 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x36721689 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,223 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x36721689-0x13fe879789b00eb connected 2013-07-16 17:14:53,234 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', 
STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:53,235 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00eb 2013-07-16 17:14:53,238 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:53,239 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:53,240 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:53,264 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7472e41b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,269 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7472e41b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,270 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7472e41b-0x13fe879789b00ec connected 2013-07-16 17:14:53,285 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:53,286 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ec 2013-07-16 17:14:53,290 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:53,290 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:53,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:53,350 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:53,412 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x22e9ac75 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,413 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x22e9ac75 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,414 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x22e9ac75-0x13fe879789b00ed connected 2013-07-16 17:14:53,442 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 
1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:53,442 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ed 2013-07-16 17:14:53,445 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:53,446 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:53,446 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:53,477 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x55a7e5ae connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,479 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x55a7e5ae Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,485 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x55a7e5ae-0x13fe879789b00ee connected 2013-07-16 17:14:53,493 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:53,493 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ee 2013-07-16 17:14:53,614 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xfff7a71 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,615 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xfff7a71 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,618 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xfff7a71-0x13fe879789b00ef connected 2013-07-16 17:14:53,629 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:53,629 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ef 2013-07-16 17:14:53,658 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x61ecc629 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,662 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x61ecc629 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,664 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x61ecc629-0x13fe879789b00f0 connected 2013-07-16 17:14:53,681 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 
2013-07-16 17:14:53,682 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f0 2013-07-16 17:14:53,842 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xd6a9883 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:53,843 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xd6a9883 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:53,852 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xd6a9883-0x13fe879789b00f1 connected 2013-07-16 17:14:53,856 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:53,856 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f1 2013-07-16 17:14:53,995 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xa2ff0ee connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,000 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xa2ff0ee Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,001 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xa2ff0ee-0x13fe879789b00f2 connected 2013-07-16 17:14:54,008 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,008 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f2 2013-07-16 17:14:54,011 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:54,012 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:54,014 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:54,019 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7c8b74b0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,023 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7c8b74b0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,024 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7c8b74b0-0x13fe879789b00f3 connected 2013-07-16 17:14:54,039 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 
17:14:54,039 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f3 2013-07-16 17:14:54,151 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x144d3f4b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,154 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x144d3f4b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,157 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x144d3f4b-0x13fe879789b00f4 connected 2013-07-16 17:14:54,166 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,166 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f4 2013-07-16 17:14:54,170 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:54,170 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:54,171 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at 
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:54,185 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x53a020d5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,186 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x53a020d5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,191 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x53a020d5-0x13fe879789b00f5 connected 2013-07-16 17:14:54,198 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,199 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f5 2013-07-16 17:14:54,321 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x509c79d4 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,322 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x509c79d4 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,323 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x509c79d4-0x13fe879789b00f6 connected 2013-07-16 17:14:54,344 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,344 INFO 
[hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f6 2013-07-16 17:14:54,351 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:54,385 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x652d03dc connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,386 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x652d03dc Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,387 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x652d03dc-0x13fe879789b00f7 connected 2013-07-16 17:14:54,415 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,416 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f7 2013-07-16 17:14:54,562 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x68bae075 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,565 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x68bae075 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,566 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x68bae075-0x13fe879789b00f8 connected 2013-07-16 17:14:54,585 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,586 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f8 2013-07-16 17:14:54,731 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4d118948 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,733 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4d118948 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,735 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4d118948-0x13fe879789b00f9 connected 2013-07-16 17:14:54,745 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,745 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00f9 2013-07-16 17:14:54,749 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:54,749 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:54,750 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at 
org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:54,769 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2556db07 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,769 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2556db07 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,772 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2556db07-0x13fe879789b00fa connected 2013-07-16 17:14:54,789 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,790 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00fa 2013-07-16 17:14:54,894 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4bf42872 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,898 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4bf42872 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,899 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4bf42872-0x13fe879789b00fb connected 2013-07-16 17:14:54,906 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,907 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00fb 2013-07-16 17:14:54,910 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:54,910 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at 
org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:54,911 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:54,915 INFO [RpcServer.handler=3,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1d163e1a connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:54,921 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1d163e1a Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:54,923 DEBUG [RpcServer.handler=3,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1d163e1a-0x13fe879789b00fc connected 2013-07-16 17:14:54,930 DEBUG [RpcServer.handler=3,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => 
'.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:54,931 INFO [RpcServer.handler=3,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00fc 2013-07-16 17:14:55,035 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x28a79601 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,054 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,054 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00fd 2013-07-16 17:14:55,057 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x28a79601 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,058 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x28a79601-0x13fe879789b00fd connected 2013-07-16 17:14:55,114 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x60092be8 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,116 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x60092be8 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,118 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x60092be8-0x13fe879789b00fe connected 2013-07-16 17:14:55,126 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,126 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00fe 2013-07-16 17:14:55,260 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6c9ebf46 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,266 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6c9ebf46 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,268 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6c9ebf46-0x13fe879789b00ff connected 2013-07-16 17:14:55,294 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,295 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b00ff 2013-07-16 17:14:55,299 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:55,299 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:21406) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:102) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:259) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:411) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:55,351 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:55,432 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5efb2e7b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,434 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5efb2e7b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,436 DEBUG 
[hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5efb2e7b-0x13fe879789b0100 connected 2013-07-16 17:14:55,442 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,443 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0100 2013-07-16 17:14:55,445 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:55,446 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:55,447 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:55,452 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1e0c6cdf connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,455 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1e0c6cdf Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,456 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1e0c6cdf-0x13fe879789b0101 connected 2013-07-16 17:14:55,467 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,468 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0101 2013-07-16 17:14:55,585 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x44d37df9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,585 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x44d37df9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,587 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x44d37df9-0x13fe879789b0102 connected 2013-07-16 17:14:55,604 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,604 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0102 2013-07-16 17:14:55,608 WARN [hbase-repl-pool-16-thread-2] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 
17:14:55,609 ERROR [RpcServer.handler=3,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:55,610 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:55,620 INFO [RpcServer.handler=4,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1d61f77c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,631 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1d61f77c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,633 DEBUG [RpcServer.handler=4,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1d61f77c-0x13fe879789b0103 connected 2013-07-16 17:14:55,646 DEBUG [RpcServer.handler=4,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,647 INFO [RpcServer.handler=4,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0103 2013-07-16 17:14:55,762 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2b7d6c0d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,767 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2b7d6c0d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,768 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2b7d6c0d-0x13fe879789b0104 connected 2013-07-16 17:14:55,782 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,782 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0104 2013-07-16 17:14:55,811 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6e69c927 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,814 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6e69c927 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,815 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6e69c927-0x13fe879789b0105 connected 2013-07-16 17:14:55,825 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,825 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0105 2013-07-16 17:14:55,987 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x71fc1432 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:55,990 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x71fc1432 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:55,991 DEBUG [hbase-repl-pool-16-thread-1-EventThread] 
zookeeper.ZooKeeperWatcher(384): hconnection-0x71fc1432-0x13fe879789b0106 connected 2013-07-16 17:14:55,997 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:55,997 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0106 2013-07-16 17:14:56,134 INFO [hbase-repl-pool-16-thread-2] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x759d3b52 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:56,136 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x759d3b52 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:56,138 DEBUG [hbase-repl-pool-16-thread-2-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x759d3b52-0x13fe879789b0107 connected 2013-07-16 17:14:56,144 DEBUG [hbase-repl-pool-16-thread-2] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:56,144 INFO [hbase-repl-pool-16-thread-2] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0107 2013-07-16 17:14:56,147 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 639 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:56,147 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:56,148 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 639 actions: FailedServerException: 639 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:56,152 INFO [RpcServer.handler=0,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x466668f9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:56,154 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x466668f9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:56,155 DEBUG [RpcServer.handler=0,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x466668f9-0x13fe879789b0108 connected 2013-07-16 17:14:56,161 DEBUG [RpcServer.handler=0,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:56,161 INFO [RpcServer.handler=0,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0108 2013-07-16 17:14:56,266 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6afe1e30 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:56,269 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6afe1e30 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:56,270 DEBUG 
[hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6afe1e30-0x13fe879789b0109 connected 2013-07-16 17:14:56,279 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:56,280 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0109 2013-07-16 17:14:56,302 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3eecda4b connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:56,308 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3eecda4b Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:56,310 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3eecda4b-0x13fe879789b010a connected 2013-07-16 17:14:56,325 DEBUG [hbase-repl-pool-16-thread-3] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:56,325 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b010a 2013-07-16 17:14:56,329 WARN [hbase-repl-pool-16-thread-1] client.AsyncProcess(620): Attempt #4/4 failed for 315 operations on server ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 NOT resubmitting., tableName=test, location=region=test,,1373994855276.f3fce37071716f89a509124ef3fd1288., hostname=ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, seqNum=1 2013-07-16 17:14:56,330 ERROR [RpcServer.handler=4,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:56,331 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(661): Can't replicate because of an error on the remote cluster: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 315 actions: FailedServerException: 315 times, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:158) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:146) at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:692) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2106) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) 2013-07-16 17:14:56,335 INFO [RpcServer.handler=2,port=55133] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x63bb4029 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:56,340 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x63bb4029 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:56,341 DEBUG [RpcServer.handler=2,port=55133-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x63bb4029-0x13fe879789b010b connected 2013-07-16 17:14:56,351 DEBUG [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor(1357): total tasks = 3 unassigned = 2 2013-07-16 17:14:56,357 DEBUG [RpcServer.handler=2,port=55133] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:56,357 INFO [RpcServer.handler=2,port=55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b010b 2013-07-16 17:14:56,469 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1dc325b7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:56,470 DEBUG 
[hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1dc325b7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:56,471 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1dc325b7-0x13fe879789b010c connected 2013-07-16 17:14:56,492 DEBUG [hbase-repl-pool-16-thread-1] client.ClientScanner(198): Finished region={ENCODED => 1028785192, NAME => '.META.,,1', STARTKEY => '', ENDKEY => ''} 2013-07-16 17:14:56,493 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b010c 2013-07-16 17:14:56,495 WARN [Thread-595] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:56,495 WARN [Thread-595] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:56,496 WARN [Thread-595] client.ServerCallable(177): Call exception, tries=5, numRetries=6, retryTime=-12098ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:21370) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:290) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:147) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:55) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:219) at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:611) at org.apache.hadoop.hbase.replication.TestReplicationQueueFailover.queueFailover(TestReplicationQueueFailover.java:98) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) 2013-07-16 17:14:56,496 DEBUG [Thread-595] client.HConnectionManager$HConnectionImplementation(1097): Removed all cached region locations that map to ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314 2013-07-16 17:14:56,538 INFO [pool-1-thread-1] hbase.ResourceChecker(171): after: replication.TestReplicationQueueFailoverCompressed#queueFailover Thread=538 (was 527) Potentially hanging thread: hbase-repl-pool-16-thread-1-SendThread(localhost:62127) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:219) org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1157) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1109) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: Async disk worker #0 for volume /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917) java.lang.Thread.run(Thread.java:662) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:485) org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1373) org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: hbase-table-pool-62-thread-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917) java.lang.Thread.run(Thread.java:662) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: DataStreamer for file /user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994883590 block BP-182397264-10.197.55.49-1373994843896:blk_7656279180433659224_1179 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:441) Potentially hanging thread: scan-prefetch-4-thread-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917) java.lang.Thread.run(Thread.java:662) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: hbase-repl-pool-16-thread-1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:610) org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) java.util.concurrent.FutureTask.run(FutureTask.java:138) java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) java.lang.Thread.run(Thread.java:662) Potentially hanging thread: IPC Client 
(845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: hbase-table-pool-62-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917) java.lang.Thread.run(Thread.java:662) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50904-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:156) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917) java.lang.Thread.run(Thread.java:662) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) 
connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: ResponseProcessor for block BP-182397264-10.197.55.49-1373994843896:blk_7656279180433659224_1179 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:116) java.io.FilterInputStream.read(FilterInputStream.java:66) java.io.FilterInputStream.read(FilterInputStream.java:66) org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1338) org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:671) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client 
(845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.0 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129) java.io.FilterInputStream.read(FilterInputStream.java:116) java.io.FilterInputStream.read(FilterInputStream.java:116) org.apache.hadoop.hbase.ipc.RpcClient$Connection$PingInputStream.read(RpcClient.java:538) java.io.BufferedInputStream.fill(BufferedInputStream.java:218) java.io.BufferedInputStream.read(BufferedInputStream.java:237) java.io.DataInputStream.readInt(DataInputStream.java:370) org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1052) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:704) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703) Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2 java.lang.Object.wait(Native Method) 
    org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657)
    org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703)

Potentially hanging thread: IPC Client (845163371) connection to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 from ec2-user.hfs.2
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:657)
    org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:703)
[this IPC Client entry recurs many more times in the dump with the same name and stack; the verbatim repeats are omitted and only the distinct thread entries are listed below]

Potentially hanging thread: DataXceiver for client DFSClient_hb_rs_ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736_1341351016_206 at /127.0.0.1:53517 [Receiving block BP-182397264-10.197.55.49-1373994843896:blk_7656279180433659224_1179]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
    java.io.FilterInputStream.read(FilterInputStream.java:116)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    java.io.DataInputStream.read(DataInputStream.java:132)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:644)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:506)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:65)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: PacketResponder: BP-182397264-10.197.55.49-1373994843896:blk_7656279180433659224_1179, type=LAST_IN_PIPELINE, downstreams=0:[]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:485)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:909)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: org.apache.hadoop.hdfs.SocketCache@236db810
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.SocketCache.run(SocketCache.java:242)
    org.apache.hadoop.hdfs.SocketCache.access$000(SocketCache.java:45)
    org.apache.hadoop.hdfs.SocketCache$1.run(SocketCache.java:122)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: Async disk worker #0 for volume /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data4/current
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: hbase-repl-pool-16-thread-3
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:610)
    org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85)
    org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419)
    java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    java.util.concurrent.FutureTask.run(FutureTask.java:138)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: PacketResponder: BP-182397264-10.197.55.49-1373994843896:blk_7656279180433659224_1179, type=HAS_DOWNSTREAM_IN_PIPELINE
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:116)
    java.io.FilterInputStream.read(FilterInputStream.java:66)
    java.io.FilterInputStream.read(FilterInputStream.java:66)
    org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1338)
    org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:894)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: hbase-repl-pool-16-thread-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: Async disk worker #0 for volume /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data2/current
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: DataXceiver for client DFSClient_hb_rs_ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736_1341351016_206 at /127.0.0.1:40543 [Receiving block BP-182397264-10.197.55.49-1373994843896:blk_7656279180433659224_1179]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
    java.io.FilterInputStream.read(FilterInputStream.java:116)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    java.io.DataInputStream.read(DataInputStream.java:132)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:644)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:506)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:65)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: Async disk worker #0 for volume /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data1/current
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.master.SplitLogManager.waitForSplittingCompletion(SplitLogManager.java:416)
    org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:324)
    org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:425)
    org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:398)
    org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
    org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:191)
    org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: ReplicationExecutor-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: ReplicationExecutor-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    java.lang.Thread.run(Thread.java:662)

Potentially hanging thread: hbase-table-pool-62-thread-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    java.lang.Thread.run(Thread.java:662)
- Thread LEAK? -, OpenFileDescriptor=837 (was 769) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=65536 (was 65536), SystemLoadAverage=278 (was 165) - SystemLoadAverage LEAK? -, ProcessCount=83 (was 82) - ProcessCount LEAK? -, AvailableMemoryMB=6197 (was 7679), ConnectionCount=10 (was 12)
2013-07-16 17:14:56,539 WARN [pool-1-thread-1] hbase.ResourceChecker(134): Thread=538 is superior to 500
2013-07-16 17:14:56,539 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(913): Shutting down minicluster
2013-07-16 17:14:56,540 DEBUG [pool-1-thread-1] util.JVMClusterUtil(237): Shutting down HBase Cluster
2013-07-16 17:14:56,540 INFO [pool-1-thread-1] master.HMaster(2254): Cluster shutdown requested
2013-07-16 17:14:56,540 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276
2013-07-16 17:14:56,540 INFO [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211-BalancerChore] hbase.Chore(93): ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211-BalancerChore exiting
2013-07-16 17:14:56,543 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running
2013-07-16 17:14:56,541 INFO [CatalogJanitor-ip-10-197-55-49:50669] hbase.Chore(93): CatalogJanitor-ip-10-197-55-49:50669 exiting
2013-07-16 17:14:56,543 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/running
2013-07-16 17:14:56,541 INFO [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211-ClusterStatusChore] hbase.Chore(93): ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211-ClusterStatusChore exiting
2013-07-16 17:14:56,545 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZKUtil(433): regionserver:55133-0x13fe879789b0012 Set watcher on znode that does not yet exist, /2/running
2013-07-16 17:14:56,543 INFO [pool-1-thread-1] regionserver.HRegionServer(1685): STOPPED: Shutdown requested
2013-07-16 17:14:56,545 INFO [pool-1-thread-1] regionserver.HRegionServer(1685): STOPPED: Shutdown requested
2013-07-16 17:14:56,546 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(433): master:50669-0x13fe879789b0011 Set watcher on znode that does not yet exist, /2/running
2013-07-16 17:14:56,547 ERROR [RpcServer.handler=0,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: java.io.InterruptedIOException: Interrupted.
currentNumberOfTask=2, tableName=test, tasksDone=2 at org.apache.hadoop.hbase.client.AsyncProcess.waitForNextTaskDone(AsyncProcess.java:637) at org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:659) at org.apache.hadoop.hbase.client.AsyncProcess.waitUntilDone(AsyncProcess.java:670) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2103) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:56,547 ERROR [RpcServer.handler=2,port=55133] regionserver.ReplicationSink(168): Unable to accept edit because: java.io.InterruptedIOException: Interrupted. currentNumberOfTask=1, tableName=test, tasksDone=1 at org.apache.hadoop.hbase.client.AsyncProcess.waitForNextTaskDone(AsyncProcess.java:637) at org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:659) at org.apache.hadoop.hbase.client.AsyncProcess.waitUntilDone(AsyncProcess.java:670) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2103) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) 2013-07-16 17:14:56,548 WARN [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(681): Can't replicate because of a local or network error: java.io.InterruptedIOException: java.io.InterruptedIOException: Interrupted. 
currentNumberOfTask=2, tableName=test, tasksDone=2 at org.apache.hadoop.hbase.client.AsyncProcess.waitForNextTaskDone(AsyncProcess.java:637) at org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:659) at org.apache.hadoop.hbase.client.AsyncProcess.waitUntilDone(AsyncProcess.java:670) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2103) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:232) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:96) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.InterruptedIOException): java.io.InterruptedIOException: Interrupted. 
currentNumberOfTask=2, tableName=test, tasksDone=2 at org.apache.hadoop.hbase.client.AsyncProcess.waitForNextTaskDone(AsyncProcess.java:637) at org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:659) at org.apache.hadoop.hbase.client.AsyncProcess.waitUntilDone(AsyncProcess.java:670) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2103) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) ... 2 more 2013-07-16 17:14:56,552 INFO [RS:0;ip-10-197-55-49:55133] regionserver.SplitLogWorker(596): Sending interrupt to stop the worker thread 2013-07-16 17:14:56,552 INFO [RS:0;ip-10-197-55-49:55133] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager gracefully. 2013-07-16 17:14:56,552 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(594): Finishing writing output logs and closing down. 2013-07-16 17:14:56,553 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] wal.HLogSplitter(601): Processed 0 edits across 0 regions; log file=hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 is corrupted = false progress failed = false 2013-07-16 17:14:56,553 INFO [RS:0;ip-10-197-55-49:55133.logRoller] regionserver.LogRoller(119): LogRoller exiting. 
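Note on the "Potentially hanging thread" dump and the "Thread=538 is superior to 500" warning above: both come from the test resource checker, which snapshots the live threads before the test and reports, with stack traces, whichever threads are still alive afterwards. The snippet below is only a minimal, hypothetical sketch of that style of before/after leak check; the class, thread names and threshold are made up for illustration and this is not the actual hbase.ResourceChecker code.

import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical illustration of a before/after thread-leak check. NOT the real
// org.apache.hadoop.hbase.ResourceChecker implementation, just the general
// technique that produces "Potentially hanging thread" style output.
public class ThreadLeakSketch {

    // Snapshot the names of all threads that are currently alive.
    static Set<String> liveThreadNames() {
        return Thread.getAllStackTraces().keySet().stream()
                .map(Thread::getName)
                .collect(Collectors.toSet());
    }

    // Print every thread that exists now but was not in the "before" snapshot,
    // together with its current stack trace.
    static void reportNewThreads(Set<String> before) {
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            if (before.contains(t.getName())) {
                continue; // was already running before the "test", not a leak candidate
            }
            System.out.println("Potentially hanging thread: " + t.getName());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    " + frame);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Set<String> before = liveThreadNames();

        // Simulate a test that leaves a parked thread behind.
        Thread leaked = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException ignored) {
                // exit when interrupted
            }
        }, "leaked-ipc-like-thread");
        leaked.setDaemon(true);
        leaked.start();
        Thread.sleep(100); // give the leaked thread time to park in sleep()

        int after = Thread.getAllStackTraces().size();
        if (after > before.size()) { // crude threshold, purely illustrative
            System.out.println("Thread=" + after + " is superior to " + before.size());
        }
        reportNewThreads(before);
    }
}

Run standalone, this prints one "Potentially hanging thread" block for the deliberately leaked thread, the same shape as the entries in the dump above.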
2013-07-16 17:14:56,553 WARN [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker$1(142): log splitting of .logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting/ip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314.1373994862658 interrupted, resigning java.io.InterruptedIOException at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverDFSFileLease(FSHDFSUtils.java:121) at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverFileLease(FSHDFSUtils.java:54) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:835) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:516) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:467) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:137) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:351) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:238) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:198) at java.lang.Thread.run(Thread.java:662) Caused by: java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverDFSFileLease(FSHDFSUtils.java:115) ... 9 more 2013-07-16 17:14:56,558 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(388): task execution interrupted because worker is exiting /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:56,553 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(909): stopping server ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:56,558 DEBUG [RS:0;ip-10-197-55-49:55133] catalog.CatalogTracker(208): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@73f378c8 2013-07-16 17:14:56,559 INFO [RS:0;ip-10-197-55-49:55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0015 2013-07-16 17:14:56,552 WARN [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(681): Can't replicate because of a local or network error: java.io.InterruptedIOException: java.io.InterruptedIOException: Interrupted. 
currentNumberOfTask=1, tableName=test, tasksDone=1 at org.apache.hadoop.hbase.client.AsyncProcess.waitForNextTaskDone(AsyncProcess.java:637) at org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:659) at org.apache.hadoop.hbase.client.AsyncProcess.waitUntilDone(AsyncProcess.java:670) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2103) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:232) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:96) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:642) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:376) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.InterruptedIOException): java.io.InterruptedIOException: Interrupted. 
currentNumberOfTask=1, tableName=test, tasksDone=1 at org.apache.hadoop.hbase.client.AsyncProcess.waitForNextTaskDone(AsyncProcess.java:637) at org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:659) at org.apache.hadoop.hbase.client.AsyncProcess.waitUntilDone(AsyncProcess.java:670) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2103) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:689) at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:697) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:682) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:239) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:161) at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:174) at org.apache.hadoop.hbase.regionserver.HRegionServer.replicateWALEntry(HRegionServer.java:3756) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14402) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1856) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1387) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:15177) at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:94) ... 2 more 2013-07-16 17:14:56,558 INFO [Thread-353] regionserver.MemStoreFlusher$FlushHandler(267): Thread-353 exiting 2013-07-16 17:14:56,558 INFO [RS:0;ip-10-197-55-49:55133.compactionChecker] hbase.Chore(93): RS:0;ip-10-197-55-49:55133.compactionChecker exiting 2013-07-16 17:14:56,556 INFO [Thread-1749] regionserver.ReplicationSource$2(799): Slave cluster looks down: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: java.io.EOFException java.io.IOException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: java.io.EOFException at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1419) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1052) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:704) 2013-07-16 17:14:56,553 INFO [RS_OPEN_META-ip-10-197-55-49:55133-0MetaLogRoller] regionserver.LogRoller(119): LogRoller exiting. 
2013-07-16 17:14:56,563 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(462): successfully transitioned task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 to final state RESIGNED ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:56,569 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(396): worker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 done with task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 in 30646ms 2013-07-16 17:14:56,569 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] regionserver.SplitLogWorker(205): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 exiting 2013-07-16 17:14:56,563 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:56,563 INFO [RS:0;ip-10-197-55-49:55133] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager gracefully. 2013-07-16 17:14:56,562 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:56,569 INFO [RS:0;ip-10-197-55-49:55133] regionserver.CompactSplitThread(356): Waiting for Split Thread to finish... 2013-07-16 17:14:56,569 INFO [RS:0;ip-10-197-55-49:55133] regionserver.CompactSplitThread(356): Waiting for Merge Thread to finish... 2013-07-16 17:14:56,570 INFO [RS:0;ip-10-197-55-49:55133] regionserver.CompactSplitThread(356): Waiting for Large Compaction Thread to finish... 2013-07-16 17:14:56,570 INFO [RS:0;ip-10-197-55-49:55133] regionserver.CompactSplitThread(356): Waiting for Small Compaction Thread to finish... 
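The "Not able to close an output stream" / "Not able to close an input stream" warnings in the entries that follow are the RPC client's cleanup path touching a socket that is already closed: on a closed java.net.Socket, getOutputStream() and getInputStream() throw SocketException("Socket is closed"). A minimal sketch of that behaviour is shown below; the class name and wiring are illustrative only.

import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;

// Illustrative only: closing the streams of a socket that is already closed
// throws SocketException("Socket is closed"), which is what the
// "Not able to close an output stream" warnings below are reporting while the
// RPC client tears down a connection that never came up.
public class ClosedSocketSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {          // any free local port
            Socket client = new Socket("localhost", server.getLocalPort());
            client.close();                                        // connection already torn down
            try {
                client.getOutputStream().close();                  // cleanup still tries to close the stream
            } catch (SocketException e) {
                System.out.println("Not able to close an output stream: " + e.getMessage());
            }
        }
    }
}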
2013-07-16 17:14:56,570 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(733): task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 entered state: RESIGNED ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:56,571 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(849): resubmitting task /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:56,572 WARN [Thread-1750] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:56,573 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 2013-07-16 17:14:56,573 WARN [Thread-1750] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:56,575 INFO [Thread-1750] 
regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:56,576 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(728): task not yet acquired /2/splitlog/.logs%2Fip-10-197-55-49.us-west-1.compute.internal%2C39939%2C1373994850314-splitting%2Fip-10-197-55-49.us-west-1.compute.internal%252C39939%252C1373994850314.1373994862658 ver = 4 2013-07-16 17:14:56,577 INFO [pool-1-thread-1-EventThread] master.SplitLogManager(736): task /2/splitlog/RESCAN0000000007 entered state: DONE ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211 2013-07-16 17:14:56,580 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/splitlog/RESCAN0000000007 2013-07-16 17:14:56,580 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager$DeleteAsyncCallback(1553): deleted /2/splitlog/RESCAN0000000007 2013-07-16 17:14:56,580 DEBUG [pool-1-thread-1-EventThread] master.SplitLogManager(917): deleted task without in memory state /2/splitlog/RESCAN0000000007 2013-07-16 17:14:56,586 INFO [RS:0;ip-10-197-55-49:55133.periodicFlusher] hbase.Chore(93): RS:0;ip-10-197-55-49:55133.periodicFlusher exiting 2013-07-16 17:14:56,590 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1076): Waiting on 1 regions to close 2013-07-16 17:14:56,590 DEBUG [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(1080): {1028785192=.META.,,1.1028785192} 2013-07-16 17:14:56,590 DEBUG [RS_CLOSE_META-ip-10-197-55-49:55133-0] handler.CloseRegionHandler(125): Processing close of .META.,,1.1028785192 2013-07-16 17:14:56,593 INFO [RS:0;ip-10-197-55-49:55133.leaseChecker] regionserver.Leases(124): RS:0;ip-10-197-55-49:55133.leaseChecker closing leases 2013-07-16 17:14:56,593 INFO [RS:0;ip-10-197-55-49:55133.leaseChecker] regionserver.Leases(131): RS:0;ip-10-197-55-49:55133.leaseChecker closed leases 2013-07-16 17:14:56,599 DEBUG [RS_CLOSE_META-ip-10-197-55-49:55133-0] regionserver.HRegion(965): Closing .META.,,1.1028785192: disabling compactions & flushes 2013-07-16 17:14:56,600 DEBUG [RS_CLOSE_META-ip-10-197-55-49:55133-0] regionserver.HRegion(987): Updates disabled for region .META.,,1.1028785192 2013-07-16 17:14:56,600 DEBUG [RS_CLOSE_META-ip-10-197-55-49:55133-0] 
regionserver.HRegion(1492): Started memstore flush for .META.,,1.1028785192, current region memstore size 1008 2013-07-16 17:14:56,615 DEBUG [RS_CLOSE_META-ip-10-197-55-49:55133-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:14:56,616 WARN [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] master.SplitLogManager(418): Stopped while waiting for log splits to be completed 2013-07-16 17:14:56,617 WARN [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] master.SplitLogManager(336): error while splitting logs in [hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting] installed = 7 but only 4 done 2013-07-16 17:14:56,629 ERROR [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-0] executor.EventHandler(133): Caught throwable while processing event M_SERVER_SHUTDOWN java.io.IOException: failed log splitting for ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314, will retry at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:310) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:197) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,39939,1373994850314-splitting] Task = installed = 7 done = 4 error = 0 at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:341) at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:425) at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:398) at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:191) ... 
4 more 2013-07-16 17:14:56,629 ERROR [MASTER_SERVER_OPERATIONS-ip-10-197-55-49:50669-1] executor.EventHandler(133): Caught throwable while processing event M_SERVER_SHUTDOWN java.io.IOException: Server is stopped at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:180) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:56,652 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 1 2013-07-16 17:14:56,672 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 1 2013-07-16 17:14:56,705 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x51211f6d connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:56,709 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_170006464095135473_1043{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:56,710 INFO [IPC Server handler 7 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_170006464095135473_1043 size 1320 2013-07-16 17:14:56,711 INFO [RS_CLOSE_META-ip-10-197-55-49:55133-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4, memsize=1008, hasBloomFilter=false, into tmp file hdfs://localhost:56710/user/ec2-user/hbase/.META./1028785192/.tmp/3ebe08ca61ce419fa9e0853d4942935a 2013-07-16 17:14:56,711 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:56,712 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:56,712 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:56,713 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x51211f6d Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:56,756 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x51211f6d-0x13fe879789b010d connected 2013-07-16 17:14:56,763 ERROR [IPC Server handler 6 on 49060] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.2 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:14:56,770 WARN [RS_CLOSE_META-ip-10-197-55-49:55133-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:51438 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.2 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
25 more 2013-07-16 17:14:56,782 INFO [Thread-1761] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:56,782 INFO [Thread-1764] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:56,786 DEBUG [RS_CLOSE_META-ip-10-197-55-49:55133-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:56710/user/ec2-user/hbase/.META./1028785192/.tmp/3ebe08ca61ce419fa9e0853d4942935a as hdfs://localhost:56710/user/ec2-user/hbase/.META./1028785192/info/3ebe08ca61ce419fa9e0853d4942935a 2013-07-16 17:14:56,797 INFO [RS_CLOSE_META-ip-10-197-55-49:55133-0] regionserver.HStore(759): Added hdfs://localhost:56710/user/ec2-user/hbase/.META./1028785192/info/3ebe08ca61ce419fa9e0853d4942935a, entries=4, sequenceid=4, filesize=1.3 K 2013-07-16 17:14:56,797 INFO [RS_CLOSE_META-ip-10-197-55-49:55133-0] regionserver.HRegion(1636): Finished memstore flush of ~1008/1008, currentsize=0/0 for region .META.,,1.1028785192 in 197ms, sequenceid=4, compaction requested=false 2013-07-16 17:14:56,813 INFO [StoreCloserThread-.META.,,1.1028785192-1] regionserver.HStore(661): Closed info 2013-07-16 17:14:56,813 INFO [RS_CLOSE_META-ip-10-197-55-49:55133-0] regionserver.HRegion(1045): Closed .META.,,1.1028785192 2013-07-16 17:14:56,813 DEBUG [RS_CLOSE_META-ip-10-197-55-49:55133-0] handler.CloseRegionHandler(177): Closed region .META.,,1.1028785192 2013-07-16 17:14:56,865 DEBUG 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 2 2013-07-16 17:14:56,882 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 2 2013-07-16 17:14:56,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 34281, total replicated edits: 1992 2013-07-16 17:14:56,918 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-208ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:56,990 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(935): stopping server ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276; all regions closed. 
2013-07-16 17:14:56,991 INFO [RS_OPEN_META-ip-10-197-55-49:55133-0.logSyncer] wal.FSHLog$LogSyncer(966): RS_OPEN_META-ip-10-197-55-49:55133-0.logSyncer exiting 2013-07-16 17:14:56,991 DEBUG [RS:0;ip-10-197-55-49:55133] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:56,994 INFO [IPC Server handler 6 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_-1629403050832523598_1018{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:51438|RBW], ReplicaUnderConstruction[127.0.0.1:47006|RBW]]} size 0 2013-07-16 17:14:56,995 INFO [IPC Server handler 9 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_-1629403050832523598_1018 size 334 2013-07-16 17:14:56,996 INFO [RS:0;ip-10-197-55-49:55133.logSyncer] wal.FSHLog$LogSyncer(966): RS:0;ip-10-197-55-49:55133.logSyncer exiting 2013-07-16 17:14:56,997 DEBUG [RS:0;ip-10-197-55-49:55133] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:56710/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:57,046 INFO [IPC Server handler 5 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51438 is added to blk_8228908424393699019_1044{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:57,046 INFO [IPC Server handler 4 on 56710] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47006 is added to blk_8228908424393699019_1044{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:47006|RBW], ReplicaUnderConstruction[127.0.0.1:51438|RBW]]} size 0 2013-07-16 17:14:57,062 DEBUG [RS:0;ip-10-197-55-49:55133] wal.FSHLog(768): Moved 2 WAL file(s) to /user/ec2-user/hbase/.oldlogs 2013-07-16 17:14:57,079 INFO [Thread-1772] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:57,091 INFO [Thread-1774] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the 
failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:57,176 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 3 2013-07-16 17:14:57,177 INFO [RS:0;ip-10-197-55-49:55133] regionserver.Leases(124): RS:0;ip-10-197-55-49:55133 closing leases 2013-07-16 17:14:57,177 INFO [RS:0;ip-10-197-55-49:55133] regionserver.Leases(131): RS:0;ip-10-197-55-49:55133 closed leases 2013-07-16 17:14:57,190 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 3 2013-07-16 17:14:57,224 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-514ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at 
org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,351 INFO [ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor] hbase.Chore(93): ip-10-197-55-49.us-west-1.compute.internal,50669,1373994850211.splitLogManagerTimeoutMonitor exiting 2013-07-16 17:14:57,477 INFO [Thread-1777] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:57,491 INFO [Thread-1779] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:57,558 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:57,577 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 4 
2013-07-16 17:14:57,590 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 4 2013-07-16 17:14:57,728 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1018ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,728 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b010d 2013-07-16 17:14:57,730 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:14:56 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6dc74dfb, java.net.ConnectException: Connection refused Tue Jul 16 17:14:56 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6dc74dfb, 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:14:57 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6dc74dfb, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:14:57 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6dc74dfb, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:14:57,731 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,731 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,732 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:14:57,733 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6a2361b0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:57,737 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6a2361b0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:57,738 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6a2361b0-0x13fe879789b010e connected 2013-07-16 17:14:57,747 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,747 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,748 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-8ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,953 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-213ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:57,978 INFO [Thread-1785] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:57,992 INFO [Thread-1787] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:58,078 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 5 2013-07-16 17:14:58,091 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 5 2013-07-16 17:14:58,256 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-516ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:58,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:14:58,560 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:58,579 WARN [Thread-1790] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:58,579 WARN [Thread-1790] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: 
Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:58,580 INFO [Thread-1790] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:58,592 INFO [Thread-1792] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 
2013-07-16 17:14:58,678 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 6 2013-07-16 17:14:58,692 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 6 2013-07-16 17:14:58,760 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1020ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:58,761 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b010e 2013-07-16 17:14:58,762 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:14:57 UTC 
2013, org.apache.hadoop.hbase.client.HTable$2@3d5a463c, java.net.ConnectException: Connection refused Tue Jul 16 17:14:57 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3d5a463c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:14:58 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3d5a463c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:14:58 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3d5a463c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:14:58,763 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:14:58,777 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1e269ced connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:58,779 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1e269ced Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:58,789 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1e269ced-0x13fe879789b010f connected 2013-07-16 17:14:58,799 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:58,800 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:58,800 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-5ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,005 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-210ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,285 INFO [Thread-1798] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:59,293 INFO [Thread-1800] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:14:59,310 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-515ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,383 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to 
replicate, sleeping 100 times 7 2013-07-16 17:14:59,392 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 7 2013-07-16 17:14:59,562 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:14:59,818 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1023ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,818 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b010f 2013-07-16 17:14:59,820 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:14:58 UTC 2013, 
org.apache.hadoop.hbase.client.HTable$2@714d61fc, java.net.ConnectException: Connection refused Tue Jul 16 17:14:59 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@714d61fc, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:14:59 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@714d61fc, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:14:59 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@714d61fc, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:14:59,821 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,821 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,822 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: Connection refused 2013-07-16 17:14:59,825 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3e8fcf27 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:14:59,825 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3e8fcf27 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:14:59,827 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3e8fcf27-0x13fe879789b0110 connected 2013-07-16 17:14:59,829 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,830 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:14:59,830 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:00,033 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:00,083 INFO [Thread-1807] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:00,093 INFO [Thread-1809] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:00,183 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 8 2013-07-16 17:15:00,193 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 8 2013-07-16 17:15:00,337 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:00,563 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:00,843 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at 
org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:00,844 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0110 2013-07-16 17:15:00,846 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:14:59 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6e5d9c6d, java.net.ConnectException: Connection refused Tue Jul 16 17:15:00 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6e5d9c6d, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:00 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6e5d9c6d, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:00 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6e5d9c6d, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at 
java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:00,847 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:00,848 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x21ce6a57 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:00,851 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x21ce6a57 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:00,852 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x21ce6a57-0x13fe879789b0111 connected 2013-07-16 17:15:00,855 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:00,856 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:00,856 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:00,985 WARN [Thread-1815] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:00,986 WARN [Thread-1815] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:00,986 INFO [Thread-1815] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:00,994 INFO [Thread-1817] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:01,059 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,084 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 9 2013-07-16 17:15:01,093 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 9 2013-07-16 17:15:01,362 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) 
at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,564 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:01,865 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1011ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,866 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0111 2013-07-16 
17:15:01,868 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:00 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@440eca83, java.net.ConnectException: Connection refused Tue Jul 16 17:15:01 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@440eca83, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:01 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@440eca83, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:01 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@440eca83, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:01,868 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,869 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,869 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: Connection refused 2013-07-16 17:15:01,871 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x112da053 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:01,872 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x112da053 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:01,874 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x112da053-0x13fe879789b0112 connected 2013-07-16 17:15:01,876 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,876 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,877 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:01,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 39281, total replicated edits: 1992 2013-07-16 17:15:01,985 INFO [Thread-1824] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:01,994 INFO [Thread-1826] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:02,080 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) 
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:02,085 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:02,094 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:02,384 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:02,566 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:02,891 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:02,892 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0112 2013-07-16 17:15:02,893 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:01 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@44a6c9cc, java.net.ConnectException: Connection refused Tue Jul 16 17:15:02 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@44a6c9cc, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:02 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@44a6c9cc, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:02 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@44a6c9cc, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:02,894 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:02,895 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x198b5948 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:02,897 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x198b5948 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:02,898 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x198b5948-0x13fe879789b0113 connected 2013-07-16 17:15:02,900 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:02,901 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:02,901 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,086 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:03,086 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:03,087 WARN [Thread-1833] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:03,088 WARN [Thread-1833] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:03,088 INFO [Thread-1833] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:03,095 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:03,095 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:03,095 INFO [Thread-1835] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:03,104 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,187 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:03,195 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:03,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:03,407 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,567 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:03,914 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,915 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0113 2013-07-16 17:15:03,916 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:02 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@18dddc42, java.net.ConnectException: Connection refused Tue Jul 16 17:15:03 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@18dddc42, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:03 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@18dddc42, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:03 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@18dddc42, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:03,917 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,917 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,921 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x22c2222 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:03,924 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x22c2222 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:03,926 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x22c2222-0x13fe879789b0114 connected 2013-07-16 17:15:03,929 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,929 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:03,930 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at 
java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:04,133 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:04,188 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:04,188 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:04,196 INFO [Thread-1843] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:04,196 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:04,197 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:04,205 INFO [Thread-1845] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:04,296 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:04,305 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:04,438 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:04,569 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:04,942 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1014ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:04,943 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0114 2013-07-16 17:15:04,944 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:03 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@46b12f3d, java.net.ConnectException: Connection refused Tue Jul 16 17:15:04 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@46b12f3d, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:04 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@46b12f3d, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:04 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@46b12f3d, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:04,950 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x73152e3f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:04,952 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x73152e3f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:04,954 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x73152e3f-0x13fe879789b0115 connected 2013-07-16 17:15:04,956 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:04,956 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:04,956 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,160 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,297 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:05,297 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer 
ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:05,298 WARN [Thread-1852] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:05,298 WARN [Thread-1852] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:05,299 INFO [Thread-1852] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:05,306 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:05,306 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:05,307 INFO [Thread-1854] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:05,397 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:05,406 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:05,463 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,570 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:05,967 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1013ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at 
org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,968 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0115 2013-07-16 17:15:05,971 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:04 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2f59f0d1, java.net.ConnectException: Connection refused Tue Jul 16 17:15:05 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2f59f0d1, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:05 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2f59f0d1, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:05 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2f59f0d1, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at 
java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:05,972 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,972 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: 
Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,973 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:15:05,974 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x65ce70c5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:05,976 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x65ce70c5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:05,977 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x65ce70c5-0x13fe879789b0116 connected 2013-07-16 17:15:05,979 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,980 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:05,980 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:06,183 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at 
org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:06,399 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:06,399 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:06,400 INFO [Thread-1861] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:06,407 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:06,408 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:06,408 INFO [Thread-1863] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:06,487 WARN 
[hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:06,499 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:06,508 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:06,571 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:06,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 44281, total replicated edits: 1992 2013-07-16 17:15:06,990 WARN [hbase-repl-pool-16-thread-1] 
client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:06,991 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0116 2013-07-16 17:15:06,993 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:05 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@efd81de, java.net.ConnectException: Connection refused Tue Jul 16 17:15:06 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@efd81de, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:06 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@efd81de, 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:06 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@efd81de, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:06,994 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:06,996 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x18749cf8 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:07,000 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x18749cf8 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:07,001 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x18749cf8-0x13fe879789b0117 connected 2013-07-16 17:15:07,002 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:07,003 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:07,003 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:07,206 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:07,499 DEBUG [M:0;ip-10-197-55-49:50904.oldLogCleaner] master.ReplicationLogCleaner(109): Didn't find this log in ZK, deleting: ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994848151 2013-07-16 17:15:07,500 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:07,501 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:07,501 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-7924882966250519452_1072 127.0.0.1:39475 127.0.0.1:39876 2013-07-16 17:15:07,503 WARN [Thread-1871] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) 
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:07,503 WARN [Thread-1871] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:07,503 INFO [Thread-1871] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:07,509 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:07,509 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:07,513 DEBUG [M:0;ip-10-197-55-49:50904.oldLogCleaner] master.ReplicationLogCleaner(109): Didn't find this log in ZK, 
deleting: ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994848150 2013-07-16 17:15:07,515 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_7055080296068517495_1069 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:15:07,519 DEBUG [M:0;ip-10-197-55-49:50904.oldLogCleaner] master.ReplicationLogCleaner(109): Didn't find this log in ZK, deleting: ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994856409 2013-07-16 17:15:07,521 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-520ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:07,521 INFO [Thread-1873] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:07,522 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_7871647707537163371_1071 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:15:07,526 DEBUG [M:0;ip-10-197-55-49:50904.oldLogCleaner] master.ReplicationLogCleaner(109): Didn't find this log in ZK, deleting: ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994858083 2013-07-16 17:15:07,528 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(969): BLOCK* addToInvalidates: blk_-8549119146580241963_1076 127.0.0.1:39876 127.0.0.1:39475 2013-07-16 17:15:07,573 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:07,601 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:07,621 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:08,024 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1023ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,024 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0117 2013-07-16 17:15:08,026 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:07 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5d39a1e2, java.net.ConnectException: Connection refused Tue Jul 16 17:15:07 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5d39a1e2, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:07 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5d39a1e2, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:08 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5d39a1e2, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:08,027 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,027 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,028 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: Connection refused 2013-07-16 17:15:08,029 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4eca6cfe connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:08,031 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4eca6cfe Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:08,032 DEBUG 
[hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4eca6cfe-0x13fe879789b0118 connected 2013-07-16 17:15:08,034 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,034 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,035 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,238 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:08,542 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:08,574 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:08,602 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:08,602 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:08,603 INFO [Thread-1882] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:08,622 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:08,622 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:08,623 INFO [Thread-1884] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:08,703 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:08,722 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:09,046 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1013ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:09,047 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0118 2013-07-16 17:15:09,048 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:08 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@581c63a8, java.net.ConnectException: Connection refused Tue Jul 16 17:15:08 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@581c63a8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:08 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@581c63a8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:09 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@581c63a8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:09,049 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:09,051 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3413e584 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:09,053 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3413e584 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:09,054 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3413e584-0x13fe879789b0119 connected 2013-07-16 17:15:09,056 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:09,056 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:09,057 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:09,260 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, 
retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:09,566 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-511ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:09,575 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:09,704 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:09,704 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:09,705 WARN [Thread-1891] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:09,706 WARN [Thread-1891] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:09,706 INFO [Thread-1891] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:09,726 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:09,726 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:09,729 INFO [Thread-1893] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:09,805 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:09,828 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:10,070 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,070 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0119 2013-07-16 17:15:10,072 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:09 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@153f2442, java.net.ConnectException: Connection refused Tue Jul 16 17:15:09 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@153f2442, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:09 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@153f2442, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:10 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@153f2442, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:10,073 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,073 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,074 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 300 because: Connection refused 2013-07-16 17:15:10,076 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x20b63513 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:10,077 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x20b63513 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:10,079 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x20b63513-0x13fe879789b011a connected 2013-07-16 17:15:10,082 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,082 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,083 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,286 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,576 INFO 
[M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:10,589 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:10,736 INFO [M:0;ip-10-197-55-49:50669.oldLogCleaner] hbase.Chore(93): M:0;ip-10-197-55-49:50669.oldLogCleaner exiting 2013-07-16 17:15:10,736 INFO [M:0;ip-10-197-55-49:50669.oldLogCleaner] master.ReplicationLogCleaner(140): Stopping replicationLogCleaner-0x13fe879789b0017 2013-07-16 17:15:10,737 INFO [M:0;ip-10-197-55-49:50669.archivedHFileCleaner] hbase.Chore(93): M:0;ip-10-197-55-49:50669.archivedHFileCleaner exiting 2013-07-16 17:15:10,806 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:10,806 INFO 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:10,807 INFO [Thread-1900] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:10,829 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:10,829 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:10,829 INFO [Thread-1902] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:10,907 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:10,929 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:11,095 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:11,095 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b011a 2013-07-16 17:15:11,097 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:10 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@74760977, java.net.ConnectException: Connection refused Tue Jul 16 17:15:10 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@74760977, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:10 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@74760977, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:11 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@74760977, 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:11,098 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:11,099 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5c57cb connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:11,102 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5c57cb Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:11,103 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5c57cb-0x13fe879789b011b connected 2013-07-16 17:15:11,104 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:11,105 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:11,105 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:11,309 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:11,578 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:11,613 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:11,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 49281, total replicated edits: 1992 2013-07-16 17:15:11,908 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:11,908 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:11,909 WARN [Thread-1909] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:11,910 WARN [Thread-1909] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:11,910 INFO [Thread-1909] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:11,931 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:11,931 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:11,931 INFO [Thread-1911] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:12,009 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:12,031 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:12,119 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:12,119 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b011b 2013-07-16 17:15:12,121 WARN [hbase-repl-pool-16-thread-3] 
client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:11 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@23b804dd, java.net.ConnectException: Connection refused Tue Jul 16 17:15:11 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@23b804dd, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:11 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@23b804dd, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:12 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@23b804dd, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:12,122 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:12,122 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:12,124 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x161858a5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:12,127 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x161858a5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:12,128 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x161858a5-0x13fe879789b011c connected 2013-07-16 17:15:12,133 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:12,133 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:12,133 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:12,337 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:12,579 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:12,643 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-512ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:13,010 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:13,011 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:13,014 INFO [Thread-1918] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:13,032 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:13,033 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:13,056 INFO [Thread-1920] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:13,112 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:13,136 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:13,149 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1018ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:13,149 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b011c 2013-07-16 17:15:13,151 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:12 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@63d94e88, java.net.ConnectException: Connection refused Tue Jul 16 17:15:12 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@63d94e88, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:12 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@63d94e88, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:13 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@63d94e88, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:13,155 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x68cf5e45 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:13,157 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x68cf5e45 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:13,158 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x68cf5e45-0x13fe879789b011d connected 2013-07-16 17:15:13,162 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:13,162 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to 
close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:13,163 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:13,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:13,366 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:13,581 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:13,670 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,115 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:14,115 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:14,116 WARN [Thread-1927] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:14,117 WARN [Thread-1927] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:14,117 INFO [Thread-1927] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:14,137 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:14,137 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:14,147 INFO [Thread-1929] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:14,176 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1014ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,176 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b011d 2013-07-16 17:15:14,178 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:13 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@e3157a8, java.net.ConnectException: Connection refused Tue Jul 16 17:15:13 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@e3157a8, 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:13 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@e3157a8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:14 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@e3157a8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:14,179 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,179 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,179 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:15:14,186 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x13acb278 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:14,190 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,190 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,191 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at 
java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,191 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x13acb278 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:14,193 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x13acb278-0x13fe879789b011e connected 2013-07-16 17:15:14,215 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:14,247 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:14,394 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:14,586 INFO [M:0;ip-10-197-55-49:50669] 
master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:14,700 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-512ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:15,208 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1020ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:15,209 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b011e 2013-07-16 17:15:15,211 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:14 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7cef5bbe, java.net.ConnectException: Connection refused Tue Jul 16 17:15:14 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7cef5bbe, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:14 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7cef5bbe, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:15 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7cef5bbe, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:15,212 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:15,213 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6bc137c7 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:15,218 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:15,218 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:15,220 INFO [Thread-1938] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:15,220 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6bc137c7 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:15,221 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6bc137c7-0x13fe879789b011f connected 2013-07-16 17:15:15,224 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:15,224 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:15,225 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:15,248 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:15,248 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:15,252 INFO [Thread-1941] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:15,319 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:15,352 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:15,429 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:15,588 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:15,734 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-511ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,251 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1028ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) 
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,251 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b011f 2013-07-16 17:15:16,253 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:15 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@12f8288e, java.net.ConnectException: Connection refused Tue Jul 16 17:15:15 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@12f8288e, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:15 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@12f8288e, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:16 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@12f8288e, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:16,254 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,255 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,255 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: Connection refused 2013-07-16 17:15:16,268 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x46117487 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:16,272 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x46117487 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:16,274 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x46117487-0x13fe879789b0120 connected 2013-07-16 17:15:16,276 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,276 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,277 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,320 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:16,321 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:16,321 WARN [Thread-1948] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:16,322 WARN [Thread-1948] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) 
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:16,322 INFO [Thread-1948] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:16,353 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:16,353 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:16,354 INFO [Thread-1950] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:16,421 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:16,454 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] 
regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:16,483 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-209ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,589 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:16,786 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-512ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:16,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 54281, total replicated edits: 1992 2013-07-16 17:15:17,289 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:17,290 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0120 2013-07-16 17:15:17,291 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:16 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1019f2a, java.net.ConnectException: Connection refused Tue Jul 16 17:15:16 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1019f2a, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:16 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1019f2a, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:17 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1019f2a, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:17,292 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:17,293 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7e02856c connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:17,296 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7e02856c Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:17,297 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7e02856c-0x13fe879789b0121 connected 2013-07-16 17:15:17,300 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:17,300 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:17,300 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:17,423 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:17,423 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:17,424 INFO [Thread-1957] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:17,455 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:17,455 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:17,456 INFO [Thread-1959] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:17,503 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:17,524 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:17,556 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:17,591 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:17,806 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:18,313 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,314 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0121 2013-07-16 17:15:18,316 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:17 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2fb445a5, java.net.ConnectException: Connection refused Tue Jul 16 17:15:17 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2fb445a5, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:17 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2fb445a5, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:18 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2fb445a5, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:18,316 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,317 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,317 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: Connection refused 2013-07-16 17:15:18,320 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7b33cec2 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:18,321 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7b33cec2 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:18,322 DEBUG 
[hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7b33cec2-0x13fe879789b0122 connected 2013-07-16 17:15:18,326 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,326 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,327 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,525 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:18,526 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:18,527 WARN [Thread-1966] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 
17:15:18,527 WARN [Thread-1966] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:18,527 INFO [Thread-1966] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:18,530 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:18,557 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:18,557 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:18,558 INFO [Thread-1969] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:18,593 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:18,626 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] 
regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:18,657 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:18,834 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:19,336 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:19,337 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0122 2013-07-16 17:15:19,341 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:18 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5bb932cf, java.net.ConnectException: Connection refused Tue Jul 16 17:15:18 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5bb932cf, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:18 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5bb932cf, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:19 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@5bb932cf, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:19,341 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 301 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:19,343 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xc182989 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:19,345 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xc182989 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:19,346 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xc182989-0x13fe879789b0123 connected 2013-07-16 17:15:19,348 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) 
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:19,348 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:19,349 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:19,551 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-204ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:19,594 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:19,627 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:19,628 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:19,630 INFO [Thread-1976] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:19,658 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:19,658 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:19,659 INFO [Thread-1978] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:19,730 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:19,759 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:19,856 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,363 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at 
org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,364 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0123 2013-07-16 17:15:20,365 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:19 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2af426a0, java.net.ConnectException: Connection refused Tue Jul 16 17:15:19 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2af426a0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:19 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2af426a0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:20 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2af426a0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:20,366 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,366 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,369 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x68ca224f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:20,371 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x68ca224f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:20,373 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x68ca224f-0x13fe879789b0124 connected 2013-07-16 17:15:20,375 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,375 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,376 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,580 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:20,595 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:20,731 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:20,732 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:20,741 WARN [Thread-1985] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:20,742 WARN [Thread-1985] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:20,742 INFO [Thread-1985] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:20,760 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:20,760 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:20,761 INFO [Thread-1987] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:20,841 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:20,861 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:20,884 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:21,391 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1017ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:21,392 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0124 2013-07-16 17:15:21,394 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:20 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@8bae7aa, java.net.ConnectException: Connection refused Tue Jul 16 17:15:20 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@8bae7aa, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:20 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@8bae7aa, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:21 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@8bae7aa, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:21,405 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2e74946 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:21,408 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x2e74946 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:21,409 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x2e74946-0x13fe879789b0125 connected 2013-07-16 17:15:21,413 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:21,414 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:21,414 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:21,597 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:21,618 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:21,842 INFO 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:21,843 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:21,843 INFO [Thread-1994] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:21,862 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:21,862 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:21,863 INFO [Thread-1996] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:21,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 59281, total replicated edits: 1992 2013-07-16 17:15:21,921 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:21,943 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:21,962 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:22,427 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,428 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0125 2013-07-16 17:15:22,429 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:21 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@767aa36c, java.net.ConnectException: Connection refused Tue Jul 16 17:15:21 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@767aa36c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:21 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@767aa36c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:22 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@767aa36c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:22,430 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,430 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,431 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:15:22,432 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3d72d061 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:22,434 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3d72d061 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:22,435 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3d72d061-0x13fe879789b0126 connected 2013-07-16 17:15:22,437 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,437 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,438 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,599 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:22,640 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-204ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,945 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:22,945 INFO 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:22,946 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:22,947 WARN [Thread-2003] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:22,947 WARN [Thread-2003] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:22,947 INFO [Thread-2003] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:22,964 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:22,964 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer 
ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:22,964 INFO [Thread-2006] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:23,046 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:23,064 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:23,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:23,453 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1017ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:23,454 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0126 2013-07-16 17:15:23,456 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:22 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3dbd89ef, java.net.ConnectException: Connection refused Tue Jul 16 17:15:22 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3dbd89ef, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:22 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3dbd89ef, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:23 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3dbd89ef, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:23,457 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:23,461 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x10a80d4e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:23,461 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x10a80d4e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:23,462 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x10a80d4e-0x13fe879789b0127 connected 2013-07-16 17:15:23,465 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:23,465 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:23,465 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:23,600 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:23,668 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:23,972 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:24,047 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:24,047 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:24,048 INFO [Thread-2013] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:24,065 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:24,065 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:24,066 INFO [Thread-2015] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:24,148 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:24,166 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:24,479 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:24,480 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0127 2013-07-16 17:15:24,482 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:23 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1c040a30, java.net.ConnectException: Connection refused Tue Jul 16 17:15:23 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1c040a30, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:23 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1c040a30, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:24 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@1c040a30, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:24,482 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:24,483 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:24,483 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: Connection refused 2013-07-16 17:15:24,486 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1d87e84f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:24,493 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:24,493 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:24,493 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:24,494 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1d87e84f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:24,497 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x1d87e84f-0x13fe879789b0128 connected 2013-07-16 17:15:24,601 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:24,696 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:25,001 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:25,149 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:25,150 INFO 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:25,150 WARN [Thread-2022] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:25,151 WARN [Thread-2022] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:25,151 INFO [Thread-2022] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:25,167 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:25,167 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:25,167 INFO [Thread-2024] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:25,250 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:25,268 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:25,507 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:25,507 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0128 2013-07-16 17:15:25,509 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:24 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@76d5b222, java.net.ConnectException: Connection refused Tue Jul 16 17:15:24 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@76d5b222, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:25 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@76d5b222, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:25 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@76d5b222, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:25,510 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:25,517 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4f753aa9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:25,518 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4f753aa9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:25,520 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4f753aa9-0x13fe879789b0129 connected 2013-07-16 17:15:25,522 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:25,523 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:25,523 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:25,603 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:25,726 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,030 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,254 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:26,254 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:26,255 INFO [Thread-2031] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:26,269 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:26,269 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:26,270 INFO [Thread-2033] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:26,354 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:26,369 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:26,532 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1011ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,533 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0129 2013-07-16 17:15:26,535 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:25 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3da7d1b0, java.net.ConnectException: Connection refused Tue Jul 16 17:15:25 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3da7d1b0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:26 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3da7d1b0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:26 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3da7d1b0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:26,535 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,536 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,536 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: Connection refused 2013-07-16 17:15:26,537 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4b4bb49e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:26,539 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4b4bb49e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:26,540 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4b4bb49e-0x13fe879789b012a connected 2013-07-16 17:15:26,542 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,543 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,543 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,604 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:26,746 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:26,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 64281, total replicated edits: 1992 2013-07-16 17:15:27,049 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:27,356 INFO 
[ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:27,356 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:27,357 WARN [Thread-2040] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:27,358 WARN [Thread-2040] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:27,358 INFO [Thread-2040] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:27,371 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:27,371 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:27,372 INFO [Thread-2042] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:27,457 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:27,472 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:27,555 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1014ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:27,556 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b012a 2013-07-16 17:15:27,557 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:26 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@481153e5, java.net.ConnectException: Connection refused Tue Jul 16 17:15:26 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@481153e5, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:27 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@481153e5, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:27 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@481153e5, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:27,558 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:27,559 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6796447f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:27,563 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6796447f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:27,564 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6796447f-0x13fe879789b012b connected 2013-07-16 17:15:27,566 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:27,566 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:27,566 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-1ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:27,605 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:27,770 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,075 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:28,459 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:28,459 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:28,460 INFO [Thread-2049] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:28,473 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:28,473 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:28,473 INFO [Thread-2051] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:28,559 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:28,573 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:28,579 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1014ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,580 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b012b 2013-07-16 17:15:28,582 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:27 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@535265e, java.net.ConnectException: Connection refused Tue Jul 16 17:15:27 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@535265e, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:28 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@535265e, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:28 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@535265e, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:28,583 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,583 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,590 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x54353aff connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:28,597 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x54353aff Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:28,598 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x54353aff-0x13fe879789b012c connected 2013-07-16 17:15:28,600 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,601 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,601 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:28,607 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:28,805 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:29,110 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-511ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:29,561 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:29,561 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:29,565 WARN [Thread-2058] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:29,566 WARN [Thread-2058] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:29,566 INFO [Thread-2058] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:29,574 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:29,575 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:29,577 INFO [Thread-2060] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:29,609 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:29,615 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at 
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:29,615 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b012c 2013-07-16 17:15:29,617 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:28 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@15de8130, java.net.ConnectException: Connection refused Tue Jul 16 17:15:28 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@15de8130, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:29 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@15de8130, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:29 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@15de8130, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:29,626 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5bc1a591 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:29,628 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5bc1a591 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:29,630 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5bc1a591-0x13fe879789b012d connected 2013-07-16 17:15:29,631 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:29,632 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to 
close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:29,632 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:29,664 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:29,677 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:29,835 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) 
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,140 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,610 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:30,646 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,647 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b012d 2013-07-16 17:15:30,649 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:29 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4776d266, java.net.ConnectException: Connection refused Tue Jul 16 17:15:29 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4776d266, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:30 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4776d266, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:30 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4776d266, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:30,649 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,650 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,650 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:15:30,657 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3f0c0cb9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:30,660 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3f0c0cb9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:30,662 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3f0c0cb9-0x13fe879789b012e connected 2013-07-16 17:15:30,662 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,662 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,662 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:30,670 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:30,670 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:30,671 INFO [Thread-2070] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:30,684 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:30,684 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:30,697 INFO [Thread-2072] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:30,771 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:30,788 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:30,868 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-208ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:31,173 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-513ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:31,612 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:31,677 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1017ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:31,677 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b012e 2013-07-16 17:15:31,679 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:30 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4114c999, java.net.ConnectException: Connection refused Tue Jul 16 17:15:30 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4114c999, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:31 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4114c999, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:31 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4114c999, 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:31,679 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:31,689 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x35c1d286 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:31,699 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 
2013-07-16 17:15:31,700 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:31,700 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:31,700 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x35c1d286 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:31,705 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x35c1d286-0x13fe879789b012f connected 2013-07-16 17:15:31,772 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:31,772 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:31,778 WARN [Thread-2079] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:31,778 WARN [Thread-2079] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:31,779 INFO [Thread-2079] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:31,789 INFO 
[RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:31,789 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:31,789 INFO [Thread-2081] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:31,878 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:31,889 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:31,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 69281, total replicated edits: 1992 2013-07-16 17:15:31,904 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,206 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,613 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:32,710 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,711 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b012f 2013-07-16 17:15:32,713 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:31 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2a26263c, java.net.ConnectException: Connection refused Tue Jul 16 17:15:31 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2a26263c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:32 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2a26263c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:32 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@2a26263c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at 
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:32,713 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,714 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,714 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: Connection refused 2013-07-16 17:15:32,727 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x217b18c3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:32,732 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,732 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,733 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at 
java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,733 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x217b18c3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:32,741 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x217b18c3-0x13fe879789b0130 connected 2013-07-16 17:15:32,879 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:32,879 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:32,880 INFO [Thread-2088] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:32,890 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:32,891 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:32,892 INFO [Thread-2090] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:32,935 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, 
retryTime=-204ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:32,980 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:32,991 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:33,239 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:33,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:33,618 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:33,743 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:33,744 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0130 2013-07-16 17:15:33,746 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:32 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4fb8896f, java.net.ConnectException: Connection refused Tue Jul 16 17:15:32 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4fb8896f, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:33 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4fb8896f, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:33 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4fb8896f, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at 
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:33,747 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:33,749 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x656e5d8 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:33,751 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x656e5d8 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:33,753 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x656e5d8-0x13fe879789b0131 connected 2013-07-16 17:15:33,754 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) 
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:33,755 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:33,755 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:33,958 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:33,981 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:33,981 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:33,992 WARN [Thread-2098] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:33,992 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:33,993 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:33,993 WARN [Thread-2098] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:33,993 INFO [Thread-2098] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:33,997 INFO [Thread-2100] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:34,087 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:34,096 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:34,261 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:34,619 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:34,765 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:34,766 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0131 2013-07-16 17:15:34,767 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:33 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@86b7485, java.net.ConnectException: Connection 
refused Tue Jul 16 17:15:33 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@86b7485, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:34 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@86b7485, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:34 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@86b7485, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:34,768 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:34,768 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:34,769 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 301 because: Connection refused 2013-07-16 17:15:34,770 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x10668c26 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:34,774 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x10668c26 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:34,775 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:34,776 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:34,777 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:34,778 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x10668c26-0x13fe879789b0132 connected 2013-07-16 17:15:34,979 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:35,089 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:35,089 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:35,090 INFO 
[Thread-2107] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:35,097 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:35,098 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:35,098 INFO [Thread-2109] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:35,189 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:35,198 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:35,282 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:35,620 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:35,788 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1014ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:35,788 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0132 2013-07-16 17:15:35,790 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:34 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@615e29b9, java.net.ConnectException: Connection refused Tue Jul 16 17:15:34 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@615e29b9, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:35 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@615e29b9, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:35 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@615e29b9, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:35,791 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:35,794 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x15ed46c0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:35,795 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x15ed46c0 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:35,796 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x15ed46c0-0x13fe879789b0133 connected 2013-07-16 17:15:35,801 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:35,802 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:35,802 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,006 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-207ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,191 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:36,191 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:36,192 WARN [Thread-2116] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:36,192 WARN [Thread-2116] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:36,193 INFO [Thread-2116] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:36,199 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:36,199 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:36,200 INFO [Thread-2118] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:36,292 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:36,299 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:36,311 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-512ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,623 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:36,814 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1015ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,815 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0133 2013-07-16 17:15:36,816 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:35 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@72b4a129, java.net.ConnectException: Connection 
refused Tue Jul 16 17:15:36 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@72b4a129, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:36 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@72b4a129, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:36 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@72b4a129, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:36,817 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,817 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,819 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x39736dd4 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:36,821 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x39736dd4 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:36,822 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x39736dd4-0x13fe879789b0134 connected 2013-07-16 17:15:36,825 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,825 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,825 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at 
java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:36,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 74281, total replicated edits: 1992 2013-07-16 17:15:37,029 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:37,293 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:37,293 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:37,295 INFO [Thread-2125] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers 
list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:37,300 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:37,301 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:37,305 INFO [Thread-2127] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:37,338 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-515ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at 
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:37,395 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:37,405 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:37,624 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:37,841 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1018ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:37,842 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0134 2013-07-16 17:15:37,844 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:36 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@669a5514, java.net.ConnectException: Connection refused Tue Jul 16 17:15:37 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@669a5514, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:37 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@669a5514, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:37 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@669a5514, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:37,846 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6f81f3d1 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:37,849 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x6f81f3d1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:37,850 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x6f81f3d1-0x13fe879789b0135 connected 2013-07-16 17:15:37,863 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:37,864 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to 
close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:37,864 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-8ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,068 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-212ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:38,382 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-526ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,396 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:38,396 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:38,397 WARN [Thread-2136] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:38,397 WARN [Thread-2136] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:38,397 INFO [Thread-2136] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:38,406 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:38,406 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:38,407 INFO [Thread-2138] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 
17:15:38,496 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:38,506 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:38,626 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:38,889 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1033ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,890 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0135 2013-07-16 17:15:38,892 WARN [hbase-repl-pool-16-thread-3] 
client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:37 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@17cbbac9, java.net.ConnectException: Connection refused Tue Jul 16 17:15:38 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@17cbbac9, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:38 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@17cbbac9, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:38 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@17cbbac9, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:38,893 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,893 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,893 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:15:38,895 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7f574c12 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:38,898 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7f574c12 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:38,899 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7f574c12-0x13fe879789b0136 connected 2013-07-16 17:15:38,903 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,903 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:38,904 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-4ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:39,108 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-208ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:39,412 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-512ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:39,498 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:39,498 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:39,499 INFO [Thread-2145] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:39,507 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:39,508 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:39,508 INFO [Thread-2147] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:39,598 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:39,608 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:39,628 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:39,919 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1019ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) 
at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:39,920 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0136 2013-07-16 17:15:39,921 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:38 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4771c0b8, java.net.ConnectException: Connection refused Tue Jul 16 17:15:39 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4771c0b8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:39 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4771c0b8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:39 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4771c0b8, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) 
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:39,922 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:39,924 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4e124a9 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:39,926 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4e124a9 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:39,927 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4e124a9-0x13fe879789b0137 connected 2013-07-16 17:15:39,929 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) 
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:39,929 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:39,930 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,133 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,438 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,600 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:40,600 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:40,601 WARN [Thread-2154] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:40,601 WARN [Thread-2154] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:40,602 INFO [Thread-2154] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:40,609 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:40,609 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:40,610 INFO [Thread-2156] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:40,629 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 
2013-07-16 17:15:40,701 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:40,709 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:40,941 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1013ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,942 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0137 2013-07-16 17:15:40,943 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:39 UTC 
2013, org.apache.hadoop.hbase.client.HTable$2@4f26ed11, java.net.ConnectException: Connection refused Tue Jul 16 17:15:40 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4f26ed11, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:40 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4f26ed11, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:40 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@4f26ed11, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:40,944 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,944 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,945 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: Connection refused 2013-07-16 17:15:40,946 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4f9db2da connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:40,948 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x4f9db2da Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:40,950 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x4f9db2da-0x13fe879789b0138 connected 2013-07-16 17:15:40,952 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,952 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:40,953 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:41,155 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:41,458 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:41,630 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:41,704 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:41,704 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:41,706 INFO [Thread-2163] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:41,711 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:41,711 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:41,713 INFO [Thread-2165] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:41,805 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:41,811 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:41,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 79281, total replicated edits: 1992 2013-07-16 17:15:41,963 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1013ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:41,964 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0138 2013-07-16 17:15:41,965 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:40 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@202b9f72, java.net.ConnectException: Connection refused Tue Jul 16 17:15:41 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@202b9f72, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:41 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@202b9f72, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:41 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@202b9f72, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:41,966 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:41,975 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x70fcc5b1 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:41,976 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x70fcc5b1 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:41,977 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x70fcc5b1-0x13fe879789b0139 connected 2013-07-16 17:15:41,983 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:41,983 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:41,984 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-4ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:42,187 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-207ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:42,493 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-513ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:42,632 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:42,807 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:42,807 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:42,811 WARN [Thread-2172] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:42,812 WARN [Thread-2172] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) 
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:42,812 INFO [Thread-2172] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:42,812 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:42,812 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:42,813 INFO [Thread-2174] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 
17:15:42,908 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:42,913 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:43,007 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1027ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,008 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0139 2013-07-16 17:15:43,011 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:41 UTC 2013, 
org.apache.hadoop.hbase.client.HTable$2@9530a8a, java.net.ConnectException: Connection refused Tue Jul 16 17:15:42 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@9530a8a, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:42 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@9530a8a, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:43 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@9530a8a, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:43,012 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,012 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,013 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 300 because: Connection refused 2013-07-16 17:15:43,034 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x42bab618 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:43,035 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x42bab618 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:43,038 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x42bab618-0x13fe879789b013a connected 2013-07-16 17:15:43,039 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,039 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,039 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,242 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently 
replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:43,547 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:43,634 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:43,909 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:43,909 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:43,910 INFO 
[Thread-2181] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:43,914 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:43,914 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:43,914 INFO [Thread-2183] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:44,010 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:44,014 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:44,049 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:44,050 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b013a 2013-07-16 17:15:44,051 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:43 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6d14693f, java.net.ConnectException: Connection refused Tue Jul 16 17:15:43 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6d14693f, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:43 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6d14693f, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:44 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6d14693f, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at 
org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:44,052 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 300 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:44,053 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xb5b8941 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:44,055 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xb5b8941 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:44,056 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xb5b8941-0x13fe879789b013b connected 2013-07-16 17:15:44,058 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) 
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:44,058 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:44,059 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:44,262 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:44,565 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:44,635 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:45,011 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:45,012 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:45,013 WARN [Thread-2190] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:45,013 WARN [Thread-2190] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) 
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:45,013 INFO [Thread-2190] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:45,015 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:45,015 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:45,016 INFO [Thread-2192] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 
17:15:45,070 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1013ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,070 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b013b 2013-07-16 17:15:45,072 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:44 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@75c2b15c, java.net.ConnectException: Connection refused Tue Jul 16 17:15:44 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@75c2b15c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:44 UTC 2013, 
org.apache.hadoop.hbase.client.HTable$2@75c2b15c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:45 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@75c2b15c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:45,072 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,073 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,074 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xea5bbf5 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:45,076 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xea5bbf5 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:45,077 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xea5bbf5-0x13fe879789b013c connected 2013-07-16 17:15:45,079 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,080 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,080 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at 
java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,112 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:45,116 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:45,284 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,588 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:45,637 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:46,092 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1014ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:46,093 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b013c 2013-07-16 17:15:46,094 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:45 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7006ea2c, java.net.ConnectException: Connection refused Tue Jul 16 17:15:45 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7006ea2c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:45 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7006ea2c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:46 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7006ea2c, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:46,098 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7bdb2d63 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:46,099 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7bdb2d63 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:46,100 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7bdb2d63-0x13fe879789b013d connected 2013-07-16 17:15:46,102 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:46,102 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to 
close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:46,103 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:46,113 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:46,113 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:46,114 INFO [Thread-2202] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:46,117 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:46,117 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:46,117 INFO [Thread-2204] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:46,214 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:46,217 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:46,305 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-204ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:46,610 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at 
org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:46,638 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:46,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 84281, total replicated edits: 1992 2013-07-16 17:15:47,117 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:47,118 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b013d 2013-07-16 17:15:47,120 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:46 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6f058ec, java.net.ConnectException: Connection refused Tue Jul 16 17:15:46 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6f058ec, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:46 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6f058ec, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:47 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6f058ec, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:47,121 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:47,121 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:47,121 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:15:47,126 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7e8b62f3 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:47,132 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7e8b62f3 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:47,134 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7e8b62f3-0x13fe879789b013e connected 2013-07-16 17:15:47,134 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:47,134 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:47,135 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:47,215 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:47,215 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:47,216 WARN [Thread-2211] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:47,216 WARN [Thread-2211] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:47,217 INFO [Thread-2211] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:47,218 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:47,218 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:47,219 INFO [Thread-2213] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:47,316 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:47,318 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:47,337 WARN 
[hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-204ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:47,640 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:47,642 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:48,145 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:48,146 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b013e 2013-07-16 17:15:48,147 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:47 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7a9da471, java.net.ConnectException: Connection refused Tue Jul 16 17:15:47 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7a9da471, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:47 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7a9da471, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:48 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7a9da471, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:48,148 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:48,150 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5aafbf38 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:48,152 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x5aafbf38 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:48,155 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:48,155 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:48,156 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:48,156 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x5aafbf38-0x13fe879789b013f connected 2013-07-16 17:15:48,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:48,322 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:48,322 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:48,323 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:48,323 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:48,324 INFO [Thread-2221] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:48,324 INFO [Thread-2220] regionserver.ReplicationSource$2(799): Slave cluster looks down: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 java.io.IOException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1419) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) ... 
5 more 2013-07-16 17:15:48,358 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-204ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:48,423 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:48,423 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:48,641 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:48,665 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-511ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,170 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at 
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,171 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b013f 2013-07-16 17:15:49,172 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:48 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6641fdcf, java.net.ConnectException: Connection refused Tue Jul 16 17:15:48 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6641fdcf, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:48 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6641fdcf, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:49 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@6641fdcf, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:49,173 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,173 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,174 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 201 because: Connection refused 2013-07-16 17:15:49,181 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xd3f2d04 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:49,181 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0xd3f2d04 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:49,183 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0xd3f2d04-0x13fe879789b0140 connected 2013-07-16 17:15:49,185 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,186 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,186 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,388 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-204ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:49,424 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:49,424 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:49,425 INFO 
[RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:49,425 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:49,426 WARN [Thread-2229] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:49,426 WARN [Thread-2229] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:49,427 INFO [Thread-2229] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:49,427 INFO [Thread-2230] regionserver.ReplicationSource$2(799): Slave cluster looks down: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on connection exception: java.net.ConnectException: Connection refused java.net.ConnectException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on connection exception: java.net.ConnectException: Connection refused at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1413) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) ... 
5 more 2013-07-16 17:15:49,525 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:49,525 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:49,642 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:49,693 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:50,197 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1013ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:50,198 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0140 2013-07-16 17:15:50,200 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:49 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3d2cd949, java.net.ConnectException: Connection refused Tue Jul 16 17:15:49 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3d2cd949, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:49 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@3d2cd949, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:50 UTC 2013, 
org.apache.hadoop.hbase.client.HTable$2@3d2cd949, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:50,200 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:50,202 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x53f7c06e connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:50,204 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x53f7c06e Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:50,205 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x53f7c06e-0x13fe879789b0141 connected 2013-07-16 17:15:50,210 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:50,210 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:50,211 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:50,414 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:50,526 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:50,526 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:50,526 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:50,527 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:50,527 INFO [Thread-2237] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:50,527 INFO [Thread-2238] regionserver.ReplicationSource$2(799): Slave cluster looks down: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 java.io.IOException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1419) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) ... 
5 more 2013-07-16 17:15:50,627 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:50,627 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:50,644 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:50,717 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,219 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1011ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,220 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0141 2013-07-16 17:15:51,222 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:50 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@10c131b0, java.net.ConnectException: Connection refused Tue Jul 16 17:15:50 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@10c131b0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:50 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@10c131b0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:51 UTC 2013, 
org.apache.hadoop.hbase.client.HTable$2@10c131b0, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:51,222 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,223 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,223 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 302 because: Connection refused 2013-07-16 17:15:51,224 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x65cb5512 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:51,226 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x65cb5512 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:51,227 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x65cb5512-0x13fe879789b0142 connected 2013-07-16 17:15:51,229 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,229 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,230 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,433 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,629 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:51,629 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:51,630 INFO 
[RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:51,630 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:51,631 WARN [Thread-2245] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:51,631 WARN [Thread-2245] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:51,631 INFO [Thread-2245] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at 
org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:51,632 INFO [Thread-2246] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:51,645 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:51,730 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:51,730 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:51,738 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:51,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 89281, total replicated edits: 1992 2013-07-16 17:15:52,244 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1016ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:52,245 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0142 2013-07-16 17:15:52,247 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:51 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@274e30c7, java.net.ConnectException: Connection refused Tue Jul 16 17:15:51 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@274e30c7, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:51 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@274e30c7, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:52 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@274e30c7, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:52,247 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=2 of 4 failed; retrying after sleep of 300 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:52,248 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7080d6b4 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:52,250 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x7080d6b4 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:52,251 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x7080d6b4-0x13fe879789b0143 connected 2013-07-16 17:15:52,253 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:52,254 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:52,254 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at 
org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:52,458 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:52,646 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 
2013-07-16 17:15:52,731 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:52,731 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:52,731 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:52,732 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:52,733 INFO [Thread-2254] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:52,733 INFO [Thread-2255] regionserver.ReplicationSource$2(799): Slave cluster looks down: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 java.io.IOException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1419) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) ... 5 more 2013-07-16 17:15:52,760 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-508ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:52,832 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:52,833 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:53,264 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1012ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: 
ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,265 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0143 2013-07-16 17:15:53,267 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:52 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7a6e9226, java.net.ConnectException: Connection refused Tue Jul 16 17:15:52 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7a6e9226, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:52 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7a6e9226, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:53 UTC 2013, 
org.apache.hadoop.hbase.client.HTable$2@7a6e9226, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:53,268 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,268 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,270 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x66dce3dc connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:53,272 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x66dce3dc Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:53,276 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,276 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,277 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-3ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,278 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x66dce3dc-0x13fe879789b0144 connected 2013-07-16 17:15:53,312 INFO 
[ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:53,480 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-206ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,648 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 
17:15:53,783 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-509ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:53,833 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:53,834 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:53,834 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:53,834 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:53,835 WARN [Thread-2263] 
ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:53,835 WARN [Thread-2263] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:53,836 INFO [Thread-2263] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:53,836 INFO [Thread-2264] regionserver.ReplicationSource$2(799): Slave cluster looks down: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on connection exception: java.net.ConnectException: Connection refused java.net.ConnectException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on connection exception: java.net.ConnectException: Connection refused at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1413) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) ... 
5 more 2013-07-16 17:15:53,934 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:53,935 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:54,288 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1014ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:54,288 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0144 2013-07-16 17:15:54,290 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:53 
UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7d37cc67, java.net.ConnectException: Connection refused Tue Jul 16 17:15:53 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7d37cc67, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:53 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7d37cc67, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:54 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@7d37cc67, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:54,292 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x85ca668 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:54,294 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x85ca668 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:54,295 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x85ca668-0x13fe879789b0145 connected 2013-07-16 17:15:54,296 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:54,297 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close 
an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:54,297 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:54,500 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:54,649 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:54,805 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:54,936 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:54,936 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:54,936 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:54,936 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:54,936 INFO [Thread-2271] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:54,936 INFO [Thread-2272] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:55,036 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:55,036 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:55,308 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1013ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:55,309 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper 
sessionid=0x13fe879789b0145 2013-07-16 17:15:55,311 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:54 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@21191f35, java.net.ConnectException: Connection refused Tue Jul 16 17:15:54 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@21191f35, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:54 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@21191f35, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:55 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@21191f35, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 18 more 2013-07-16 17:15:55,311 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:55,312 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:913) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:55,312 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: Connection refused 2013-07-16 17:15:55,313 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x18eefb41 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:55,315 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x18eefb41 Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:55,317 DEBUG [hbase-repl-pool-16-thread-1-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x18eefb41-0x13fe879789b0146 connected 2013-07-16 17:15:55,318 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at 
org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:55,319 WARN [hbase-repl-pool-16-thread-1] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:55,319 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at 
org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:55,522 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-205ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:55,651 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 
2013-07-16 17:15:55,827 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-510ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:56,038 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:56,038 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:56,038 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:56,038 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:56,039 WARN [Thread-2280] 
ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:56,039 WARN [Thread-2280] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:56,039 INFO [Thread-2280] regionserver.ReplicationSource$2(799): Slave cluster looks down: Connection refused java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:56,040 INFO [Thread-2281] regionserver.ReplicationSource$2(799): Slave cluster looks down: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on connection exception: java.net.ConnectException: Connection refused java.net.ConnectException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on connection exception: java.net.ConnectException: Connection refused at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1413) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) ... 
5 more 2013-07-16 17:15:56,138 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:56,138 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:56,334 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=3, numRetries=4, retryTime=-1017ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:56,334 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0146 2013-07-16 17:15:56,336 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=4, exceptions: Tue Jul 16 17:15:55 
UTC 2013, org.apache.hadoop.hbase.client.HTable$2@35b102dd, java.net.ConnectException: Connection refused Tue Jul 16 17:15:55 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@35b102dd, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:55 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@35b102dd, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 Tue Jul 16 17:15:56 UTC 2013, org.apache.hadoop.hbase.client.HTable$2@35b102dd, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:205) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) ... 
18 more 2013-07-16 17:15:56,336 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=0 of 4 failed; retrying after sleep of 100 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:56,338 INFO [hbase-repl-pool-16-thread-3] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x3bc8a16f connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:56,340 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3bc8a16f Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2013-07-16 17:15:56,342 DEBUG [hbase-repl-pool-16-thread-3-EventThread] zookeeper.ZooKeeperWatcher(384): hconnection-0x3bc8a16f-0x13fe879789b0147 connected 2013-07-16 17:15:56,344 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(584): Not able to close an output stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getOutputStream(SocketAdaptor.java:242) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:580) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:56,344 WARN [hbase-repl-pool-16-thread-3] ipc.RpcClient$Connection(591): Not able to close an input stream java.net.SocketException: Socket is closed at sun.nio.ch.SocketAdaptor.getInputStream(SocketAdaptor.java:220) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.closeConnection(RpcClient.java:587) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleConnectionFailure(RpcClient.java:629) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:568) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:56,345 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-2ms java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:555) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:843) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:56,554 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=1, numRetries=4, retryTime=-211ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:56,652 INFO [M:0;ip-10-197-55-49:50669] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:56,857 WARN [hbase-repl-pool-16-thread-3] client.ServerCallable(177): Call exception, tries=2, numRetries=4, retryTime=-514ms org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:21334) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1264) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:597) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:595) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:175) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:56,903 INFO [ip-10-197-55-49:55133Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Sink: age in ms of last applied edit: 94281, total replicated edits: 1992 2013-07-16 17:15:57,140 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:57,140 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(209): Getting 1 rs from peer cluster # 2 2013-07-16 17:15:57,140 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:57,140 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(217): Choosing peer ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:57,141 INFO [Thread-2288] regionserver.ReplicationSource$2(799): Slave cluster looks down: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) 2013-07-16 17:15:57,141 INFO [Thread-2289] regionserver.ReplicationSource$2(799): Slave cluster looks down: 
Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 java.io.IOException: Call to ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 failed on local exception: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1419) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1391) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1591) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1648) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.getServerInfo(AdminProtos.java:15213) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1466) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$2.run(ReplicationSource.java:793) Caused by: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:828) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1473) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1365) ... 5 more 2013-07-16 17:15:57,180 INFO [RS:0;ip-10-197-55-49:55133] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0018 2013-07-16 17:15:57,180 INFO [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0147 2013-07-16 17:15:57,181 WARN [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: java.io.IOException: Interrupted after 2 tries on 4 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:233) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at 
java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:230) ... 18 more 2013-07-16 17:15:57,181 DEBUG [hbase-repl-pool-16-thread-3] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:57,183 INFO [hbase-repl-pool-16-thread-1] zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x5309b8c0 connecting to ZooKeeper ensemble=localhost:62127 2013-07-16 17:15:57,183 WARN [hbase-repl-pool-16-thread-1] zookeeper.ZKUtil(489): hconnection-0x5309b8c0 Unable to set watcher on znode (/2/hbaseid) java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:485) at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:482) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:588) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.(HConnectionManager.java:506) at sun.reflect.GeneratedConstructorAccessor24.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:293) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:252) at org.apache.hadoop.hbase.client.HTable.(HTable.java:173) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at 
org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:57,183 DEBUG [hbase-repl-pool-16-thread-1] zookeeper.ZooKeeperWatcher(455): hconnection-0x5309b8c0 Received InterruptedException, doing nothing here java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:485) at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:482) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:588) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.(HConnectionManager.java:506) at sun.reflect.GeneratedConstructorAccessor24.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:293) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:252) at org.apache.hadoop.hbase.client.HTable.(HTable.java:173) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 
17:15:57,184 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:57,184 INFO [hbase-repl-pool-16-thread-1] client.ZooKeeperRegistry(85): ClusterId read in ZooKeeper is null 2013-07-16 17:15:57,184 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(591): clusterid came back null, using default default-cluster 2013-07-16 17:15:57,184 INFO [pool-1-thread-1-EventThread] zookeeper.RegionServerTracker(94): RegionServer ephemeral node deleted, processing expiration [ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276] 2013-07-16 17:15:57,184 WARN [hbase-repl-pool-16-thread-1] zookeeper.ZKUtil(698): hconnection-0x5309b8c0 Unable to get data of znode /2/meta-region-server java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:485) at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1149) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:309) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:686) at org.apache.hadoop.hbase.zookeeper.ZKUtil.blockUntilAvailable(ZKUtil.java:1765) at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:183) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:790) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:258) at org.apache.hadoop.hbase.client.HTable.(HTable.java:190) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:57,186 DEBUG [hbase-repl-pool-16-thread-1] 
zookeeper.ZooKeeperWatcher(455): hconnection-0x5309b8c0 Received InterruptedException, doing nothing here java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:485) at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1149) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:309) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:686) at org.apache.hadoop.hbase.zookeeper.ZKUtil.blockUntilAvailable(ZKUtil.java:1765) at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:183) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:790) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:258) at org.apache.hadoop.hbase.client.HTable.(HTable.java:190) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:57,186 WARN [hbase-repl-pool-16-thread-1] zookeeper.ZKUtil(698): hconnection-0x5309b8c0 Unable to get data of znode /2/meta-region-server java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:485) at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1149) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:309) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:686) at org.apache.hadoop.hbase.zookeeper.ZKUtil.blockUntilAvailable(ZKUtil.java:1765) at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:183) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:790) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:643) at org.apache.hadoop.hbase.client.ServerCallable.prepare(ServerCallable.java:97) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:174) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:57,186 DEBUG [hbase-repl-pool-16-thread-1] zookeeper.ZooKeeperWatcher(455): hconnection-0x5309b8c0 Received InterruptedException, doing nothing here java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:485) at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1149) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:309) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:686) at org.apache.hadoop.hbase.zookeeper.ZKUtil.blockUntilAvailable(ZKUtil.java:1765) at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:183) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:790) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:643) at org.apache.hadoop.hbase.client.ServerCallable.prepare(ServerCallable.java:97) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:174) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) 
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:57,185 INFO [pool-1-thread-1-EventThread] master.ServerManager(494): Cluster shutdown set; ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 expired; onlineServers=0 2013-07-16 17:15:57,187 WARN [hbase-repl-pool-16-thread-1] client.ServerCallable(177): Call exception, tries=0, numRetries=4, retryTime=-1ms java.io.IOException: Failed to find location, tableName=.META., row=test,hwy,99999999999999, reload=false at org.apache.hadoop.hbase.client.ServerCallable.prepare(ServerCallable.java:99) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:174) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:57,185 DEBUG [M:0;ip-10-197-55-49:50669] master.HMaster(1148): Stopping service threads 2013-07-16 17:15:57,187 INFO [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x0 2013-07-16 17:15:57,184 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/rs/ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276 2013-07-16 17:15:57,188 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(964): stopping server ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276; zookeeper connection closed. 2013-07-16 17:15:57,189 INFO [RS:0;ip-10-197-55-49:55133] regionserver.HRegionServer(967): RS:0;ip-10-197-55-49:55133 exiting 2013-07-16 17:15:57,187 INFO [pool-1-thread-1-EventThread] master.HMaster(2254): Cluster shutdown set; onlineServer=0 2013-07-16 17:15:57,189 WARN [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(844): Encountered problems when prefetch META table: java.io.IOException: Interrupted after 0 tries on 4 at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:233) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:595) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:135) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:901) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:793) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:762) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:285) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370) at org.apache.hadoop.hbase.client.AsyncProcess.receiveMultiAction(AsyncProcess.java:618) at org.apache.hadoop.hbase.client.AsyncProcess.access$300(AsyncProcess.java:85) at org.apache.hadoop.hbase.client.AsyncProcess$1.run(AsyncProcess.java:419) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:230) ... 
18 more 2013-07-16 17:15:57,189 DEBUG [RS:0;ip-10-197-55-49:55133-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:55133-0x13fe879789b0012 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2013-07-16 17:15:57,189 DEBUG [pool-1-thread-1-EventThread] catalog.CatalogTracker(208): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@13a4071a 2013-07-16 17:15:57,190 INFO [pool-1-thread-1-EventThread] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0014 2013-07-16 17:15:57,192 DEBUG [hbase-repl-pool-16-thread-1] client.HConnectionManager$HConnectionImplementation(976): locateRegionInMeta parentTable=.META., metaLocation={region=.META.,,1.1028785192, hostname=ip-10-197-55-49.us-west-1.compute.internal,55133,1373994850276, seqNum=0}, attempt=1 of 4 failed; retrying after sleep of 200 because: This server is in the failed servers list: ip-10-197-55-49.us-west-1.compute.internal/10.197.55.49:55133 2013-07-16 17:15:57,192 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/2/rs 2013-07-16 17:15:57,193 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@650a2bb0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@650a2bb0 2013-07-16 17:15:57,194 INFO [pool-1-thread-1] util.JVMClusterUtil(309): Shutdown of 1 master(s) and 2 regionserver(s) complete 2013-07-16 17:15:57,194 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50669-0x13fe879789b0011 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/2/master 2013-07-16 17:15:57,195 INFO [pool-1-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b000f 2013-07-16 17:15:57,195 WARN [pool-1-thread-1-EventThread] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /2/master 2013-07-16 17:15:57,195 INFO [pool-1-thread-1-EventThread] util.RetryCounter(54): Sleeping 20ms before retry #1... 
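[Editor's note] The "Possibly transient ZooKeeper, quorum=localhost:62127 ... Sleeping 20ms before retry #1..." lines above are the client deciding that a connection-loss error may be transient and re-issuing the same ZooKeeper call after a short sleep. As a rough, self-contained illustration of that bounded retry-then-give-up pattern (this is not HBase's RecoverableZooKeeper or RetryCounter code; the retry count, sleep time, and every name below are invented for the sketch):

import java.util.concurrent.Callable;

// Illustrative only: a bounded retry loop in the spirit of the
// "Possibly transient ZooKeeper ... Sleeping 20ms before retry #1" messages above.
// The retry count and sleep time are arbitrary values chosen for the sketch.
public final class TransientRetry {

    /** Runs the operation, retrying up to maxRetries times on failure. */
    static <T> T withRetries(Callable<T> op, int maxRetries, long sleepMs) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {   // a real client would only retry errors it considers transient
                last = e;
                if (attempt < maxRetries) {
                    System.out.println("Sleeping " + sleepMs + "ms before retry #" + (attempt + 1) + "...");
                    Thread.sleep(sleepMs);
                }
            }
        }
        throw last;                   // retries exhausted, surface the last failure
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a ZooKeeper exists()/getData() call that fails twice, then succeeds.
        final int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("ConnectionLoss (simulated)");
            return "znode data";
        }, 3, 20);
        System.out.println(result);
    }
}

Judging only from these messages, the real client appears to retry on connection loss but to give up once the session has expired (the "ZooKeeper exists failed after 1 retries" line a little further down); the sketch above ignores that distinction and simply retries any exception.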
2013-07-16 17:15:57,196 INFO [M:0;ip-10-197-55-49:50669] master.HMaster(597): HMaster main thread exiting 2013-07-16 17:15:57,197 INFO [pool-1-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b001a 2013-07-16 17:15:57,198 INFO [pool-1-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b0010 2013-07-16 17:15:57,200 INFO [pool-1-thread-1] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b001b 2013-07-16 17:15:57,201 WARN [pool-1-thread-1] datanode.DirectoryScanner(289): DirectoryScanner: shutdown has been called 2013-07-16 17:15:57,208 INFO [pool-1-thread-1] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0 2013-07-16 17:15:57,216 WARN [pool-1-thread-1-EventThread] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/master 2013-07-16 17:15:57,216 ERROR [pool-1-thread-1-EventThread] zookeeper.RecoverableZooKeeper(240): ZooKeeper exists failed after 1 retries 2013-07-16 17:15:57,216 WARN [pool-1-thread-1-EventThread] zookeeper.ZKUtil(437): master:50669-0x13fe879789b0011 Unable to set watcher on znode /2/master org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/master at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:428) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.nodeDeleted(ZooKeeperNodeTracker.java:211) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:331) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:15:57,216 ERROR [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(439): master:50669-0x13fe879789b0011 Received unexpected KeeperException, re-throwing exception org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/master at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:428) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.nodeDeleted(ZooKeeperNodeTracker.java:211) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:331) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:15:57,216 FATAL [pool-1-thread-1-EventThread] master.HMaster(2062): Master server abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2013-07-16 17:15:57,216 FATAL [pool-1-thread-1-EventThread] master.HMaster(2067): Unexpected exception handling nodeDeleted event 
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /2/master at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:428) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.nodeDeleted(ZooKeeperNodeTracker.java:211) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:331) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:15:57,216 INFO [pool-1-thread-1-EventThread] master.HMaster(2254): Aborting 2013-07-16 17:15:57,240 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:57,241 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(579): Since we are unable to replicate, sleeping 100 times 10 2013-07-16 17:15:57,315 WARN [DataNode: [file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3,file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data4] heartbeating to localhost/127.0.0.1:56710] datanode.BPServiceActor(575): BPOfferService for Block pool BP-1477359609-10.197.55.49-1373994849464 (storage id DS-248467811-10.197.55.49-47006-1373994850069) service to localhost/127.0.0.1:56710 interrupted 2013-07-16 17:15:57,315 WARN [DataNode: [file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3,file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data4] heartbeating to localhost/127.0.0.1:56710] datanode.BPServiceActor(685): Ending block pool service for: Block pool BP-1477359609-10.197.55.49-1373994849464 (storage id DS-248467811-10.197.55.49-47006-1373994850069) service to localhost/127.0.0.1:56710 2013-07-16 17:15:57,317 WARN [pool-1-thread-1] datanode.DirectoryScanner(289): DirectoryScanner: shutdown has been called 2013-07-16 17:15:57,321 INFO [pool-1-thread-1] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0 2013-07-16 17:15:57,365 WARN [DataNode: [file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data1,file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data2] heartbeating to localhost/127.0.0.1:56710] datanode.BPServiceActor(685): Ending block pool service for: Block pool BP-1477359609-10.197.55.49-1373994849464 (storage id DS-800074225-10.197.55.49-51438-1373994849885) service to 
localhost/127.0.0.1:56710 2013-07-16 17:15:57,432 WARN [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4f163cdc] blockmanagement.BlockManager$ReplicationMonitor(3081): ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3079) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:57,433 WARN [org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@4f19c297] blockmanagement.DecommissionManager$Monitor(78): Monitor interrupted: java.lang.InterruptedException: sleep interrupted 2013-07-16 17:15:57,448 INFO [pool-1-thread-1] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0 2013-07-16 17:15:57,585 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(922): Minicluster is down 2013-07-16 17:15:57,585 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(913): Shutting down minicluster 2013-07-16 17:15:57,585 DEBUG [pool-1-thread-1] util.JVMClusterUtil(237): Shutting down HBase Cluster 2013-07-16 17:15:57,585 INFO [pool-1-thread-1] master.HMaster(2254): Cluster shutdown requested 2013-07-16 17:15:57,585 INFO [M:0;ip-10-197-55-49:50904] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:15:57,586 INFO [CatalogJanitor-ip-10-197-55-49:50904] hbase.Chore(93): CatalogJanitor-ip-10-197-55-49:50904 exiting 2013-07-16 17:15:57,586 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-BalancerChore] hbase.Chore(93): ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-BalancerChore exiting 2013-07-16 17:15:57,586 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-ClusterStatusChore] hbase.Chore(93): ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499-ClusterStatusChore exiting 2013-07-16 17:15:57,589 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running 2013-07-16 17:15:57,589 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/running 2013-07-16 17:15:57,591 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZKUtil(433): regionserver:49041-0x13fe879789b0006 Set watcher on znode that does not yet exist, /1/running 2013-07-16 17:15:57,591 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(433): master:50904-0x13fe879789b0004 Set watcher on znode that does not yet exist, /1/running 2013-07-16 17:15:57,592 INFO [pool-1-thread-1] regionserver.HRegionServer(1685): STOPPED: Shutdown requested 2013-07-16 17:15:57,592 INFO [pool-1-thread-1] regionserver.HRegionServer(1685): STOPPED: Shutdown requested 2013-07-16 17:15:57,594 INFO [RS:0;ip-10-197-55-49:49041] regionserver.SplitLogWorker(596): Sending interrupt to stop the worker thread 2013-07-16 17:15:57,594 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(281): SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException 2013-07-16 17:15:57,594 INFO [RS:0;ip-10-197-55-49:49041] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager gracefully. 
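[Editor's note] Earlier in this log the replication sources repeatedly print "Since we are unable to replicate, sleeping 100 times 10": they back off between attempts while the slave cluster is unreachable, and the message reads as a base sleep (100 ms) scaled by a multiplier that grows up to some cap (10 here). A hypothetical sketch of that kind of capped multiplicative backoff (the method name, the 100 ms base, and the cap are assumptions, not the actual ReplicationSource implementation):

import java.util.concurrent.TimeUnit;

// Illustrative sketch of a "sleep base-millis times multiplier" backoff,
// in the spirit of the "sleeping 100 times 10" messages above.
// sleepForRetries, the 100 ms base, and the cap of 10 are assumptions.
public final class ReplicationBackoff {

    private final long sleepBaseMs;
    private final int maxMultiplier;

    ReplicationBackoff(long sleepBaseMs, int maxMultiplier) {
        this.sleepBaseMs = sleepBaseMs;
        this.maxMultiplier = maxMultiplier;
    }

    /**
     * Sleeps sleepBaseMs * multiplier and reports whether the caller may
     * still grow the multiplier on the next failure.
     */
    boolean sleepForRetries(String reason, int multiplier) throws InterruptedException {
        System.out.println(reason + ", sleeping " + sleepBaseMs + " times " + multiplier);
        TimeUnit.MILLISECONDS.sleep(sleepBaseMs * multiplier);
        return multiplier < maxMultiplier;   // true while the backoff can still increase
    }

    public static void main(String[] args) throws InterruptedException {
        ReplicationBackoff backoff = new ReplicationBackoff(100, 10);
        int multiplier = 1;
        for (int attempt = 0; attempt < 5; attempt++) {   // pretend five failed attempts
            if (backoff.sleepForRetries("Since we are unable to replicate", multiplier)) {
                multiplier++;                             // grow until the cap is hit
            }
        }
    }
}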
2013-07-16 17:15:57,595 INFO [Thread-159] regionserver.MemStoreFlusher$FlushHandler(267): Thread-159 exiting 2013-07-16 17:15:57,594 INFO [SplitLogWorker-ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] regionserver.SplitLogWorker(205): SplitLogWorker ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 exiting 2013-07-16 17:15:57,594 INFO [RS:0;ip-10-197-55-49:49041.logRoller] regionserver.LogRoller(119): LogRoller exiting. 2013-07-16 17:15:57,595 INFO [RS:0;ip-10-197-55-49:49041.compactionChecker] hbase.Chore(93): RS:0;ip-10-197-55-49:49041.compactionChecker exiting 2013-07-16 17:15:57,595 INFO [RS_OPEN_META-ip-10-197-55-49:49041-0MetaLogRoller] regionserver.LogRoller(119): LogRoller exiting. 2013-07-16 17:15:57,597 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(909): stopping server ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:15:57,597 DEBUG [RS:0;ip-10-197-55-49:49041] catalog.CatalogTracker(208): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@1c8697ce 2013-07-16 17:15:57,597 INFO [RS:0;ip-10-197-55-49:49041] snapshot.RegionServerSnapshotManager(151): Stopping RegionServerSnapshotManager gracefully. 2013-07-16 17:15:57,597 INFO [RS:0;ip-10-197-55-49:49041] regionserver.CompactSplitThread(356): Waiting for Split Thread to finish... 2013-07-16 17:15:57,597 INFO [RS:0;ip-10-197-55-49:49041] regionserver.CompactSplitThread(356): Waiting for Merge Thread to finish... 2013-07-16 17:15:57,598 INFO [RS:0;ip-10-197-55-49:49041] regionserver.CompactSplitThread(356): Waiting for Large Compaction Thread to finish... 2013-07-16 17:15:57,598 INFO [RS:0;ip-10-197-55-49:49041] regionserver.CompactSplitThread(356): Waiting for Small Compaction Thread to finish... 2013-07-16 17:15:57,599 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(125): Processing close of test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:15:57,600 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(965): Closing test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b.: disabling compactions & flushes 2013-07-16 17:15:57,600 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(987): Updates disabled for region test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:15:57,600 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1492): Started memstore flush for test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b., current region memstore size 115.3 K 2013-07-16 17:15:57,600 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:15:57,601 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.: disabling compactions & flushes 2013-07-16 17:15:57,601 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:15:57,600 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(125): Processing close of test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 
2013-07-16 17:15:57,601 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(965): Closing test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061.: disabling compactions & flushes 2013-07-16 17:15:57,602 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(987): Updates disabled for region test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:15:57,602 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(1076): Waiting on 27 regions to close 2013-07-16 17:15:57,603 DEBUG [RS_CLOSE_META-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of .META.,,1.1028785192 2013-07-16 17:15:57,603 DEBUG [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing .META.,,1.1028785192: disabling compactions & flushes 2013-07-16 17:15:57,603 DEBUG [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region .META.,,1.1028785192 2013-07-16 17:15:57,603 DEBUG [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.HRegion(1492): Started memstore flush for .META.,,1.1028785192, current region memstore size 35.3 K 2013-07-16 17:15:57,609 INFO [StoreCloserThread-test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:57,609 INFO [StoreCloserThread-test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:57,609 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:15:57,609 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,kkk,1373994853026.d29efc5b487c6ba1411a330e6ea9abfc. 2013-07-16 17:15:57,609 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:15:57,618 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.: disabling compactions & flushes 2013-07-16 17:15:57,618 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. 2013-07-16 17:15:57,618 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1492): Started memstore flush for test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b., current region memstore size 168 2013-07-16 17:15:57,640 INFO [StoreCloserThread-test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:57,640 INFO [StoreCloserThread-test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:57,640 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1045): Closed test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:15:57,640 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(177): Closed region test,bbb,1373994853025.8ad63e6b6a48baaedae6985e87d53061. 2013-07-16 17:15:57,641 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(125): Processing close of test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 
2013-07-16 17:15:57,646 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(965): Closing test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.: disabling compactions & flushes 2013-07-16 17:15:57,646 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(987): Updates disabled for region test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:15:57,646 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1492): Started memstore flush for test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820., current region memstore size 115.3 K 2013-07-16 17:15:57,646 DEBUG [RS_CLOSE_META-ip-10-197-55-49:49041-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:57,648 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:57,650 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:57,654 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:57,670 INFO [RS:0;ip-10-197-55-49:49041.periodicFlusher] hbase.Chore(93): RS:0;ip-10-197-55-49:49041.periodicFlusher exiting 2013-07-16 17:15:57,683 INFO [RS:0;ip-10-197-55-49:49041.leaseChecker] regionserver.Leases(124): RS:0;ip-10-197-55-49:49041.leaseChecker closing leases 2013-07-16 17:15:57,683 INFO [RS:0;ip-10-197-55-49:49041.leaseChecker] regionserver.Leases(131): RS:0;ip-10-197-55-49:49041.leaseChecker closed leases 2013-07-16 17:15:57,758 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_4411131596586522719_1189{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:57,759 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_4411131596586522719_1189{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:57,771 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4952, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/c4611b71a935e3b170cd961ded7d0820/.tmp/2ef26bc04d394e75a4fafee4e9fc37cb 2013-07-16 17:15:57,791 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3794746661236953029_1188{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 21685 2013-07-16 17:15:57,791 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_1586094623660763127_1186{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 1003 2013-07-16 17:15:57,791 ERROR [IPC Server handler 1 on 54155] security.UserGroupInformation(1481): 
PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:57,792 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3794746661236953029_1188 size 21685 2013-07-16 17:15:57,792 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_1586094623660763127_1186 size 1003 2013-07-16 17:15:57,793 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-850577495231335485_1185{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 17751 2013-07-16 17:15:57,793 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 25 more 2013-07-16 17:15:57,794 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-850577495231335485_1185 size 17751 2013-07-16 17:15:57,799 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/c4611b71a935e3b170cd961ded7d0820/.tmp/2ef26bc04d394e75a4fafee4e9fc37cb as hdfs://localhost:43175/user/ec2-user/hbase/test/c4611b71a935e3b170cd961ded7d0820/f/2ef26bc04d394e75a4fafee4e9fc37cb 2013-07-16 17:15:57,807 ERROR [IPC Server handler 2 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:57,808 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
26 more 2013-07-16 17:15:57,810 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/c4611b71a935e3b170cd961ded7d0820/f/2ef26bc04d394e75a4fafee4e9fc37cb, entries=703, sequenceid=4952, filesize=21.2 K 2013-07-16 17:15:57,810 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. in 164ms, sequenceid=4952, compaction requested=false 2013-07-16 17:15:57,815 INFO [StoreCloserThread-test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:57,817 INFO [StoreCloserThread-test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:57,818 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1045): Closed test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:15:57,818 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(177): Closed region test,rrr,1373994853027.c4611b71a935e3b170cd961ded7d0820. 2013-07-16 17:15:57,818 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(125): Processing close of test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:15:57,818 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(965): Closing test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.: disabling compactions & flushes 2013-07-16 17:15:57,818 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(987): Updates disabled for region test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:15:57,821 INFO [StoreCloserThread-test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:57,821 INFO [StoreCloserThread-test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:57,821 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1045): Closed test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:15:57,822 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(177): Closed region test,jjj,1373994853026.ba6e592748955d732d7843b9603163dc. 2013-07-16 17:15:57,822 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(125): Processing close of test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:15:57,822 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(965): Closing test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.: disabling compactions & flushes 2013-07-16 17:15:57,822 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(987): Updates disabled for region test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:15:57,825 INFO [StoreCloserThread-test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:57,826 INFO [StoreCloserThread-test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:57,826 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1045): Closed test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 2013-07-16 17:15:57,826 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(177): Closed region test,,1373994853021.64c33257daeacd0fe5bf6a175319eadb. 
2013-07-16 17:15:57,826 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(125): Processing close of test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:15:57,826 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(965): Closing test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa.: disabling compactions & flushes 2013-07-16 17:15:57,827 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(987): Updates disabled for region test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:15:57,828 INFO [StoreCloserThread-test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:57,829 INFO [StoreCloserThread-test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:57,829 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1045): Closed test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:15:57,829 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(177): Closed region test,fff,1373994853025.7050f74c0058e5a7a912d72a5fd1f4fa. 2013-07-16 17:15:57,829 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(125): Processing close of test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:15:57,830 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(965): Closing test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.: disabling compactions & flushes 2013-07-16 17:15:57,830 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(987): Updates disabled for region test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:15:57,830 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1492): Started memstore flush for test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae., current region memstore size 115.3 K 2013-07-16 17:15:57,835 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:57,868 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3624572500505827959_1191{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:15:57,869 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3624572500505827959_1191{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:15:57,872 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4953, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/4ac8676e6af9c1c25f2f2a90ed99d3ae/.tmp/38928bf1b05d42a883fb5fd96247dd60 2013-07-16 17:15:57,883 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/4ac8676e6af9c1c25f2f2a90ed99d3ae/.tmp/38928bf1b05d42a883fb5fd96247dd60 as hdfs://localhost:43175/user/ec2-user/hbase/test/4ac8676e6af9c1c25f2f2a90ed99d3ae/f/38928bf1b05d42a883fb5fd96247dd60 2013-07-16 
17:15:57,893 ERROR [IPC Server handler 3 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:57,894 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 26 more 2013-07-16 17:15:57,895 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/4ac8676e6af9c1c25f2f2a90ed99d3ae/f/38928bf1b05d42a883fb5fd96247dd60, entries=703, sequenceid=4953, filesize=21.2 K 2013-07-16 17:15:57,895 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. in 65ms, sequenceid=4953, compaction requested=false 2013-07-16 17:15:57,913 INFO [StoreCloserThread-test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:57,914 INFO [StoreCloserThread-test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:57,914 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1045): Closed test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:15:57,914 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(177): Closed region test,ttt,1373994853027.4ac8676e6af9c1c25f2f2a90ed99d3ae. 2013-07-16 17:15:57,914 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(125): Processing close of test,sss,1373994853027.287928895932801d51170fb202253eac. 
2013-07-16 17:15:57,914 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(965): Closing test,sss,1373994853027.287928895932801d51170fb202253eac.: disabling compactions & flushes 2013-07-16 17:15:57,914 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(987): Updates disabled for region test,sss,1373994853027.287928895932801d51170fb202253eac. 2013-07-16 17:15:57,915 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1492): Started memstore flush for test,sss,1373994853027.287928895932801d51170fb202253eac., current region memstore size 115.3 K 2013-07-16 17:15:57,919 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:57,942 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_8592162778532134229_1193{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 21685 2013-07-16 17:15:57,943 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_8592162778532134229_1193 size 21685 2013-07-16 17:15:58,175 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4951, memsize=168, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/38600084dc094d719e5c6033fca5452b/.tmp/954903bae4a64b0d8624eef56fe87a07 2013-07-16 17:15:58,179 ERROR [IPC Server handler 4 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,180 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 25 more 2013-07-16 17:15:58,184 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/38600084dc094d719e5c6033fca5452b/.tmp/954903bae4a64b0d8624eef56fe87a07 as hdfs://localhost:43175/user/ec2-user/hbase/test/38600084dc094d719e5c6033fca5452b/f/954903bae4a64b0d8624eef56fe87a07 2013-07-16 17:15:58,189 ERROR [IPC Server handler 5 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,190 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4950, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/253df35786418e184ed944fb4881aa4b/.tmp/d444ad5a20d2476ab91e4fcb6a0b8fd8 2013-07-16 17:15:58,190 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
26 more
2013-07-16 17:15:58,191 INFO [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=42, memsize=35.3 K, hasBloomFilter=false, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/.META./1028785192/.tmp/d6e7b61ffab8458d88bb624d84ff401f
2013-07-16 17:15:58,192 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/38600084dc094d719e5c6033fca5452b/f/954903bae4a64b0d8624eef56fe87a07, entries=1, sequenceid=4951, filesize=1003
2013-07-16 17:15:58,194 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1636): Finished memstore flush of ~168/168, currentsize=0/0 for region test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b. in 576ms, sequenceid=4951, compaction requested=false
2013-07-16 17:15:58,196 INFO [StoreCloserThread-test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.-1] regionserver.HStore(661): Closed f
2013-07-16 17:15:58,196 INFO [StoreCloserThread-test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.-1] regionserver.HStore(661): Closed norep
2013-07-16 17:15:58,196 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.
2013-07-16 17:15:58,196 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,zzz,1373994853027.38600084dc094d719e5c6033fca5452b.
2013-07-16 17:15:58,196 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.
2013-07-16 17:15:58,196 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.: disabling compactions & flushes
2013-07-16 17:15:58,197 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.
2013-07-16 17:15:58,199 INFO [StoreCloserThread-test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.-1] regionserver.HStore(661): Closed f
2013-07-16 17:15:58,199 INFO [StoreCloserThread-test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.-1] regionserver.HStore(661): Closed norep
2013-07-16 17:15:58,199 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.
2013-07-16 17:15:58,199 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,ppp,1373994853027.8316cb643e8db1f47659c2704a5d85bd.
2013-07-16 17:15:58,200 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.
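The AccessControlException traces above come from HDFS short-circuit local reads: with the legacy implementation, a DataNode only answers getBlockLocalPathInfo() for users named in dfs.block.local-path-access.user, and the mini-cluster region server runs as ec2-user.hfs.0, which is not on that list. The read then falls back to the normal socket path (the following "Added hdfs://..." lines show the flushes still completing), so the warnings are noisy rather than fatal. Below is a minimal sketch of the two settings involved, using standard Hadoop 2.x property names; how they would be wired into this test harness is not shown in the log and is an assumption.

    import org.apache.hadoop.conf.Configuration;

    public class ShortCircuitReadSettings {
        // Returns a Configuration that allow-lists the given user for legacy
        // short-circuit reads; "ec2-user.hfs.0" is the user seen in the traces above.
        public static Configuration allowListedFor(String dataNodeUser) {
            Configuration conf = new Configuration();
            // Client side: request short-circuit (local) block reads.
            conf.setBoolean("dfs.client.read.shortcircuit", true);
            // DataNode side: only users listed here may call getBlockLocalPathInfo();
            // leaving this unset is what produces the AccessControlException above.
            conf.set("dfs.block.local-path-access.user", dataNodeUser);
            return conf;
        }
    }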
2013-07-16 17:15:58,200 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.: disabling compactions & flushes 2013-07-16 17:15:58,200 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/253df35786418e184ed944fb4881aa4b/.tmp/d444ad5a20d2476ab91e4fcb6a0b8fd8 as hdfs://localhost:43175/user/ec2-user/hbase/test/253df35786418e184ed944fb4881aa4b/f/d444ad5a20d2476ab91e4fcb6a0b8fd8 2013-07-16 17:15:58,200 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:15:58,201 INFO [StoreCloserThread-test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,201 INFO [StoreCloserThread-test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,202 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:15:58,202 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,ggg,1373994853025.d3ed59de1135ee985829ee3cbad0cee2. 2013-07-16 17:15:58,202 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:15:58,202 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.: disabling compactions & flushes 2013-07-16 17:15:58,202 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:15:58,203 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1492): Started memstore flush for test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115., current region memstore size 115.3 K 2013-07-16 17:15:58,203 DEBUG [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/.META./1028785192/.tmp/d6e7b61ffab8458d88bb624d84ff401f as hdfs://localhost:43175/user/ec2-user/hbase/.META./1028785192/info/d6e7b61ffab8458d88bb624d84ff401f 2013-07-16 17:15:58,207 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:58,208 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/253df35786418e184ed944fb4881aa4b/f/d444ad5a20d2476ab91e4fcb6a0b8fd8, entries=703, sequenceid=4950, filesize=21.2 K 2013-07-16 17:15:58,208 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 
in 608ms, sequenceid=4950, compaction requested=false 2013-07-16 17:15:58,209 INFO [StoreCloserThread-test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,210 INFO [StoreCloserThread-test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,210 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1045): Closed test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:15:58,210 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(177): Closed region test,vvv,1373994853027.253df35786418e184ed944fb4881aa4b. 2013-07-16 17:15:58,210 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(125): Processing close of test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:15:58,210 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(965): Closing test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.: disabling compactions & flushes 2013-07-16 17:15:58,211 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(987): Updates disabled for region test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:15:58,211 ERROR [IPC Server handler 6 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,211 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1492): Started memstore flush for test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59., current region memstore size 115.3 K 2013-07-16 17:15:58,211 WARN [RS_CLOSE_META-ip-10-197-55-49:49041-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
26 more 2013-07-16 17:15:58,212 INFO [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/.META./1028785192/info/d6e7b61ffab8458d88bb624d84ff401f, entries=143, sequenceid=42, filesize=17.3 K 2013-07-16 17:15:58,213 INFO [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.HRegion(1636): Finished memstore flush of ~35.3 K/36176, currentsize=0/0 for region .META.,,1.1028785192 in 610ms, sequenceid=42, compaction requested=false 2013-07-16 17:15:58,214 INFO [StoreCloserThread-.META.,,1.1028785192-1] regionserver.HStore(661): Closed info 2013-07-16 17:15:58,215 INFO [RS_CLOSE_META-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed .META.,,1.1028785192 2013-07-16 17:15:58,215 DEBUG [RS_CLOSE_META-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region .META.,,1.1028785192 2013-07-16 17:15:58,221 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_3277677878238773301_1195{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:58,222 INFO [IPC Server handler 7 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_3277677878238773301_1195{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:58,222 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:58,223 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4955, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/6ca2c5a98917cab87c982b4bbb7e0115/.tmp/08c7efb80a5f4a45a0e1f21e2c769892 2013-07-16 17:15:58,232 INFO [IPC Server handler 0 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-6193155051024268737_1197{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:15:58,232 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-6193155051024268737_1197{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:15:58,236 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4956, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/baee7b76d51e7196ee3121edc50bda59/.tmp/365ab58842c44398a2434dfdd2b0647f 2013-07-16 17:15:58,237 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/6ca2c5a98917cab87c982b4bbb7e0115/.tmp/08c7efb80a5f4a45a0e1f21e2c769892 as hdfs://localhost:43175/user/ec2-user/hbase/test/6ca2c5a98917cab87c982b4bbb7e0115/f/08c7efb80a5f4a45a0e1f21e2c769892 2013-07-16 17:15:58,242 INFO [RS:0;ip-10-197-55-49:49041.replicationSource,2] 
regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:15:58,242 DEBUG [RS:0;ip-10-197-55-49:49041.replicationSource,2] regionserver.ReplicationSource(386): Source exiting 2 2013-07-16 17:15:58,242 INFO [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(209): Getting 0 rs from peer cluster # 2 2013-07-16 17:15:58,242 DEBUG [ReplicationExecutor-0.replicationSource,2-ip-10-197-55-49.us-west-1.compute.internal,49955,1373994846790] regionserver.ReplicationSource(386): Source exiting 2 2013-07-16 17:15:58,243 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/baee7b76d51e7196ee3121edc50bda59/.tmp/365ab58842c44398a2434dfdd2b0647f as hdfs://localhost:43175/user/ec2-user/hbase/test/baee7b76d51e7196ee3121edc50bda59/f/365ab58842c44398a2434dfdd2b0647f 2013-07-16 17:15:58,245 ERROR [IPC Server handler 7 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,246 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 26 more 2013-07-16 17:15:58,247 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/6ca2c5a98917cab87c982b4bbb7e0115/f/08c7efb80a5f4a45a0e1f21e2c769892, entries=703, sequenceid=4955, filesize=21.2 K 2013-07-16 17:15:58,247 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. in 44ms, sequenceid=4955, compaction requested=false 2013-07-16 17:15:58,249 INFO [StoreCloserThread-test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,249 INFO [StoreCloserThread-test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,249 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:15:58,249 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,yyy,1373994853027.6ca2c5a98917cab87c982b4bbb7e0115. 2013-07-16 17:15:58,249 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678. 
2013-07-16 17:15:58,250 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.: disabling compactions & flushes
2013-07-16 17:15:58,250 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.
2013-07-16 17:15:58,251 INFO [StoreCloserThread-test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.-1] regionserver.HStore(661): Closed f
2013-07-16 17:15:58,251 INFO [StoreCloserThread-test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.-1] regionserver.HStore(661): Closed norep
2013-07-16 17:15:58,251 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.
2013-07-16 17:15:58,251 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,ccc,1373994853025.f4cfa4d251af617b31eb11c76cc68678.
2013-07-16 17:15:58,251 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.
2013-07-16 17:15:58,252 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.: disabling compactions & flushes
2013-07-16 17:15:58,252 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.
2013-07-16 17:15:58,253 INFO [StoreCloserThread-test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.-1] regionserver.HStore(661): Closed f
2013-07-16 17:15:58,253 INFO [StoreCloserThread-test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.-1] regionserver.HStore(661): Closed norep
2013-07-16 17:15:58,253 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.
2013-07-16 17:15:58,253 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,eee,1373994853025.b9cbc55dd9bcb588274e2598633563b2.
2013-07-16 17:15:58,253 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,nnn,1373994853026.093d3ef494905701450f33a487333200.
2013-07-16 17:15:58,253 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,nnn,1373994853026.093d3ef494905701450f33a487333200.: disabling compactions & flushes
2013-07-16 17:15:58,253 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,nnn,1373994853026.093d3ef494905701450f33a487333200.
2013-07-16 17:15:58,254 INFO [StoreCloserThread-test,nnn,1373994853026.093d3ef494905701450f33a487333200.-1] regionserver.HStore(661): Closed f
2013-07-16 17:15:58,254 INFO [StoreCloserThread-test,nnn,1373994853026.093d3ef494905701450f33a487333200.-1] regionserver.HStore(661): Closed norep
2013-07-16 17:15:58,255 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,nnn,1373994853026.093d3ef494905701450f33a487333200.
2013-07-16 17:15:58,255 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,nnn,1373994853026.093d3ef494905701450f33a487333200.
2013-07-16 17:15:58,255 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:15:58,255 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.: disabling compactions & flushes 2013-07-16 17:15:58,255 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:15:58,256 INFO [StoreCloserThread-test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,256 INFO [StoreCloserThread-test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,257 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:15:58,257 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,iii,1373994853026.55d7e62280245f719c8f2cc61c586c64. 2013-07-16 17:15:58,257 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:15:58,257 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0.: disabling compactions & flushes 2013-07-16 17:15:58,257 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:15:58,258 INFO [StoreCloserThread-test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,258 INFO [StoreCloserThread-test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,258 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:15:58,258 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,ooo,1373994853026.c7ae28d709ff479c3e4baad82cd99ca0. 2013-07-16 17:15:58,258 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:15:58,260 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.: disabling compactions & flushes 2013-07-16 17:15:58,260 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:15:58,261 INFO [StoreCloserThread-test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,261 INFO [StoreCloserThread-test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,261 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 
2013-07-16 17:15:58,261 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,hhh,1373994853026.2fd443c241020be67cc0d08d473f5134. 2013-07-16 17:15:58,261 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:15:58,261 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634.: disabling compactions & flushes 2013-07-16 17:15:58,261 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:15:58,262 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1492): Started memstore flush for test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634., current region memstore size 115.3 K 2013-07-16 17:15:58,265 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:58,267 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/baee7b76d51e7196ee3121edc50bda59/f/365ab58842c44398a2434dfdd2b0647f, entries=703, sequenceid=4956, filesize=21.2 K 2013-07-16 17:15:58,267 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. in 56ms, sequenceid=4956, compaction requested=false 2013-07-16 17:15:58,268 INFO [StoreCloserThread-test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,268 INFO [StoreCloserThread-test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,269 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1045): Closed test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:15:58,269 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(177): Closed region test,www,1373994853027.baee7b76d51e7196ee3121edc50bda59. 2013-07-16 17:15:58,269 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(125): Processing close of test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:15:58,269 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(965): Closing test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9.: disabling compactions & flushes 2013-07-16 17:15:58,269 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(987): Updates disabled for region test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 
2013-07-16 17:15:58,269 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1492): Started memstore flush for test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9., current region memstore size 115.3 K 2013-07-16 17:15:58,278 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-2979918321291381454_1199{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:58,278 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:58,279 INFO [IPC Server handler 6 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-2979918321291381454_1199 size 21685 2013-07-16 17:15:58,280 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4957, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/f8146b196ac3399ee0b4bd5a227bd634/.tmp/7f58d1fb5dfa4d999ced11d575fc9824 2013-07-16 17:15:58,288 INFO [IPC Server handler 1 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7730987498946474396_1201{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:58,289 INFO [IPC Server handler 8 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7730987498946474396_1201 size 21685 2013-07-16 17:15:58,292 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4958, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/7dde26b51ab247338eaa8d5e372498e9/.tmp/4ed38bf88f644e4395f5ab2fde505bd3 2013-07-16 17:15:58,294 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/f8146b196ac3399ee0b4bd5a227bd634/.tmp/7f58d1fb5dfa4d999ced11d575fc9824 as hdfs://localhost:43175/user/ec2-user/hbase/test/f8146b196ac3399ee0b4bd5a227bd634/f/7f58d1fb5dfa4d999ced11d575fc9824 2013-07-16 17:15:58,297 ERROR [IPC Server handler 8 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,297 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
25 more 2013-07-16 17:15:58,301 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/7dde26b51ab247338eaa8d5e372498e9/.tmp/4ed38bf88f644e4395f5ab2fde505bd3 as hdfs://localhost:43175/user/ec2-user/hbase/test/7dde26b51ab247338eaa8d5e372498e9/f/4ed38bf88f644e4395f5ab2fde505bd3 2013-07-16 17:15:58,303 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/f8146b196ac3399ee0b4bd5a227bd634/f/7f58d1fb5dfa4d999ced11d575fc9824, entries=703, sequenceid=4957, filesize=21.2 K 2013-07-16 17:15:58,303 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. in 42ms, sequenceid=4957, compaction requested=false 2013-07-16 17:15:58,305 INFO [StoreCloserThread-test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,305 INFO [StoreCloserThread-test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,305 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:15:58,306 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,uuu,1373994853027.f8146b196ac3399ee0b4bd5a227bd634. 2013-07-16 17:15:58,306 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:15:58,306 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70.: disabling compactions & flushes 2013-07-16 17:15:58,306 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:15:58,307 INFO [StoreCloserThread-test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,307 INFO [StoreCloserThread-test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,307 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:15:58,307 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,ddd,1373994853025.d88c6958af6ef781dd9834d0369f4f70. 2013-07-16 17:15:58,308 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(125): Processing close of test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:15:58,308 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(965): Closing test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91.: disabling compactions & flushes 2013-07-16 17:15:58,308 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(987): Updates disabled for region test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 
2013-07-16 17:15:58,308 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1492): Started memstore flush for test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91., current region memstore size 115.3 K 2013-07-16 17:15:58,311 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] util.FSUtils(296): DFS Client does not support most favored nodes create; using default create 2013-07-16 17:15:58,312 INFO [ip-10-197-55-49:49041Replication Statistics #0] regionserver.Replication$ReplicationStatisticsThread(295): Normal source for cluster 2: Total replicated edits: 940, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736/ip-10-197-55-49.us-west-1.compute.internal%2C49041%2C1373994846736.1373994861097 at position: N/A Recovered source for cluster/machine(s) 2: Total replicated edits: 0, currently replicating from: hdfs://localhost:43175/user/ec2-user/hbase/.oldlogs/ip-10-197-55-49.us-west-1.compute.internal%2C49955%2C1373994846790.1373994862136 at position: N/A 2013-07-16 17:15:58,314 ERROR [IPC Server handler 9 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,314 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 26 more 2013-07-16 17:15:58,316 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/7dde26b51ab247338eaa8d5e372498e9/f/4ed38bf88f644e4395f5ab2fde505bd3, entries=703, sequenceid=4958, filesize=21.2 K 2013-07-16 17:15:58,316 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. in 47ms, sequenceid=4958, compaction requested=false 2013-07-16 17:15:58,317 INFO [StoreCloserThread-test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,318 INFO [StoreCloserThread-test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,318 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1045): Closed test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:15:58,318 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(177): Closed region test,xxx,1373994853027.7dde26b51ab247338eaa8d5e372498e9. 2013-07-16 17:15:58,318 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(125): Processing close of test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 
2013-07-16 17:15:58,318 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(965): Closing test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.: disabling compactions & flushes 2013-07-16 17:15:58,318 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(987): Updates disabled for region test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:15:58,319 INFO [StoreCloserThread-test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,319 INFO [StoreCloserThread-test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,319 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1045): Closed test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:15:58,320 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(177): Closed region test,lll,1373994853026.23b3aa990a7ac4e12882f9d3eca30eea. 2013-07-16 17:15:58,320 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(125): Processing close of test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:15:58,320 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(965): Closing test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae.: disabling compactions & flushes 2013-07-16 17:15:58,320 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(987): Updates disabled for region test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:15:58,321 INFO [StoreCloserThread-test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,321 INFO [StoreCloserThread-test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,321 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] regionserver.HRegion(1045): Closed test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 2013-07-16 17:15:58,321 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-2] handler.CloseRegionHandler(177): Closed region test,mmm,1373994853026.072118ef6c0d2e55b3a9ef36a82f9fae. 
2013-07-16 17:15:58,347 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4954, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/287928895932801d51170fb202253eac/.tmp/dc801c80cd724756856ff627b53358b9 2013-07-16 17:15:58,352 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_8403941165145214826_1203{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:15:58,353 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_8403941165145214826_1203 size 21685 2013-07-16 17:15:58,354 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.DefaultStoreFlusher(88): Flushed, sequenceid=4959, memsize=115.3 K, hasBloomFilter=true, into tmp file hdfs://localhost:43175/user/ec2-user/hbase/test/930e643b6dd6efc74f14deb95249db91/.tmp/02c47257540b47f29fbf5d29a9db4b02 2013-07-16 17:15:58,359 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/287928895932801d51170fb202253eac/.tmp/dc801c80cd724756856ff627b53358b9 as hdfs://localhost:43175/user/ec2-user/hbase/test/287928895932801d51170fb202253eac/f/dc801c80cd724756856ff627b53358b9 2013-07-16 17:15:58,360 ERROR [IPC Server handler 0 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,361 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1334) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:708) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1810) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1579) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 25 more 2013-07-16 17:15:58,364 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegionFileSystem(338): Committing store file hdfs://localhost:43175/user/ec2-user/hbase/test/930e643b6dd6efc74f14deb95249db91/.tmp/02c47257540b47f29fbf5d29a9db4b02 as hdfs://localhost:43175/user/ec2-user/hbase/test/930e643b6dd6efc74f14deb95249db91/f/02c47257540b47f29fbf5d29a9db4b02 2013-07-16 17:15:58,365 INFO [ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499.splitLogManagerTimeoutMonitor] hbase.Chore(93): ip-10-197-55-49.us-west-1.compute.internal,50904,1373994846499.splitLogManagerTimeoutMonitor exiting 2013-07-16 17:15:58,367 ERROR [IPC Server handler 1 on 54155] security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user.hfs.0 (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo 2013-07-16 17:15:58,368 WARN [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] hdfs.DFSInputStream(489): Failed to connect to /127.0.0.1:39876 for block, add to deadNodes and continue. org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. 
The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:794) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689) at java.io.DataInputStream.readFully(DataInputStream.java:178) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:430) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:624) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1088) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:181) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:355) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:450) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:473) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:467) at org.apache.hadoop.hbase.regionserver.HStore.commitFile(HStore.java:752) at org.apache.hadoop.hbase.regionserver.HStore.access$200(HStore.java:109) at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:1822) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1585) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1464) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:990) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:939) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:147) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:130) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Can't continue with getBlockLocalPathInfo() authorization. The user ec2-user.hfs.0 is not allowed to call getBlockLocalPathInfo at org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockLocalPathAccess(DataNode.java:1013) at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1023) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) at org.apache.hadoop.ipc.Client.call(Client.java:1235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) at com.sun.proxy.$Proxy21.getBlockLocalPathInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:199) at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254) at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167) at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790) ... 
26 more 2013-07-16 17:15:58,369 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/287928895932801d51170fb202253eac/f/dc801c80cd724756856ff627b53358b9, entries=703, sequenceid=4954, filesize=21.2 K 2013-07-16 17:15:58,369 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,sss,1373994853027.287928895932801d51170fb202253eac. in 455ms, sequenceid=4954, compaction requested=false 2013-07-16 17:15:58,371 INFO [StoreCloserThread-test,sss,1373994853027.287928895932801d51170fb202253eac.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,371 INFO [StoreCloserThread-test,sss,1373994853027.287928895932801d51170fb202253eac.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,371 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] regionserver.HRegion(1045): Closed test,sss,1373994853027.287928895932801d51170fb202253eac. 2013-07-16 17:15:58,371 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-1] handler.CloseRegionHandler(177): Closed region test,sss,1373994853027.287928895932801d51170fb202253eac. 2013-07-16 17:15:58,408 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HStore(759): Added hdfs://localhost:43175/user/ec2-user/hbase/test/930e643b6dd6efc74f14deb95249db91/f/02c47257540b47f29fbf5d29a9db4b02, entries=703, sequenceid=4959, filesize=21.2 K 2013-07-16 17:15:58,408 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1636): Finished memstore flush of ~115.3 K/118104, currentsize=0/0 for region test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. in 100ms, sequenceid=4959, compaction requested=false 2013-07-16 17:15:58,410 INFO [StoreCloserThread-test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91.-1] regionserver.HStore(661): Closed f 2013-07-16 17:15:58,410 INFO [StoreCloserThread-test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91.-1] regionserver.HStore(661): Closed norep 2013-07-16 17:15:58,410 INFO [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] regionserver.HRegion(1045): Closed test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:15:58,410 DEBUG [RS_CLOSE_REGION-ip-10-197-55-49:49041-0] handler.CloseRegionHandler(177): Closed region test,qqq,1373994853027.930e643b6dd6efc74f14deb95249db91. 2013-07-16 17:15:58,589 INFO [M:0;ip-10-197-55-49:50904] master.ServerManager(447): Waiting on regionserver(s) to go down ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:15:58,603 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(935): stopping server ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; all regions closed. 
2013-07-16 17:15:58,603 INFO [RS_OPEN_META-ip-10-197-55-49:49041-0.logSyncer] wal.FSHLog$LogSyncer(966): RS_OPEN_META-ip-10-197-55-49:49041-0.logSyncer exiting 2013-07-16 17:15:58,604 DEBUG [RS:0;ip-10-197-55-49:49041] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:15:58,606 INFO [IPC Server handler 2 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_-8655442219439590139_1068{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:58,607 INFO [IPC Server handler 5 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_-8655442219439590139_1068{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39475|RBW], ReplicaUnderConstruction[127.0.0.1:39876|RBW]]} size 0 2013-07-16 17:15:58,609 INFO [RS:0;ip-10-197-55-49:49041.logSyncer] wal.FSHLog$LogSyncer(966): RS:0;ip-10-197-55-49:49041.logSyncer exiting 2013-07-16 17:15:58,609 DEBUG [RS:0;ip-10-197-55-49:49041] wal.FSHLog(808): Closing WAL writer in hdfs://localhost:43175/user/ec2-user/hbase/.logs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:15:58,611 INFO [IPC Server handler 9 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39475 is added to blk_7656279180433659224_1179{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:39876|RBW], ReplicaUnderConstruction[127.0.0.1:39475|RBW]]} size 0 2013-07-16 17:15:58,613 INFO [IPC Server handler 3 on 43175] blockmanagement.BlockManager(2174): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39876 is added to blk_7656279180433659224_1179 size 6091 2013-07-16 17:15:58,660 DEBUG [RS:0;ip-10-197-55-49:49041] wal.FSHLog(768): Moved 13 WAL file(s) to /user/ec2-user/hbase/.oldlogs 2013-07-16 17:15:58,762 INFO [RS:0;ip-10-197-55-49:49041] regionserver.Leases(124): RS:0;ip-10-197-55-49:49041 closing leases 2013-07-16 17:15:58,763 INFO [RS:0;ip-10-197-55-49:49041] regionserver.Leases(131): RS:0;ip-10-197-55-49:49041 closed leases 2013-07-16 17:15:58,763 INFO [RS:0;ip-10-197-55-49:49041] regionserver.ReplicationSource(756): Closing source 2 because: Region server is closing 2013-07-16 17:15:58,763 INFO [RS:0;ip-10-197-55-49:49041] client.HConnectionManager$HConnectionImplementation(1628): Closing zookeeper sessionid=0x13fe879789b000d 2013-07-16 17:15:58,768 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:15:58,768 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/rs/ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 2013-07-16 17:15:58,768 INFO [pool-1-thread-1-EventThread] zookeeper.RegionServerTracker(94): RegionServer ephemeral node deleted, processing expiration [ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736] 2013-07-16 17:15:58,768 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] 
zookeeper.ZooKeeperWatcher(307): regionserver:49041-0x13fe879789b0006 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2013-07-16 17:15:58,768 INFO [pool-1-thread-1-EventThread] master.ServerManager(494): Cluster shutdown set; ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736 expired; onlineServers=0 2013-07-16 17:15:58,768 DEBUG [M:0;ip-10-197-55-49:50904] master.HMaster(1148): Stopping service threads 2013-07-16 17:15:58,768 INFO [pool-1-thread-1-EventThread] master.HMaster(2254): Cluster shutdown set; onlineServer=0 2013-07-16 17:15:58,768 DEBUG [pool-1-thread-1-EventThread] catalog.CatalogTracker(208): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@767946a2 2013-07-16 17:15:58,769 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/1/rs 2013-07-16 17:15:58,771 INFO [M:0;ip-10-197-55-49:50904.archivedHFileCleaner] hbase.Chore(93): M:0;ip-10-197-55-49:50904.archivedHFileCleaner exiting 2013-07-16 17:15:58,771 INFO [M:0;ip-10-197-55-49:50904.oldLogCleaner] hbase.Chore(93): M:0;ip-10-197-55-49:50904.oldLogCleaner exiting 2013-07-16 17:15:58,771 INFO [M:0;ip-10-197-55-49:50904.oldLogCleaner] master.ReplicationLogCleaner(140): Stopping replicationLogCleaner-0x13fe879789b000a 2013-07-16 17:15:58,772 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(964): stopping server ip-10-197-55-49.us-west-1.compute.internal,49041,1373994846736; zookeeper connection closed. 2013-07-16 17:15:58,772 INFO [RS:0;ip-10-197-55-49:49041] regionserver.HRegionServer(967): RS:0;ip-10-197-55-49:49041 exiting 2013-07-16 17:15:58,773 ERROR [M:0;ip-10-197-55-49:50904.oldLogCleaner] client.HConnectionManager(355): Connection not found in the list, can't delete it (connection key=HConnectionKey{properties={hbase.rpc.timeout=60000, hbase.zookeeper.property.clientPort=62127, hbase.client.pause=100, zookeeper.znode.parent=/1, hbase.client.retries.number=350, hbase.zookeeper.quorum=localhost}, username='ec2-user'}). May be the key was modified? 2013-07-16 17:15:58,774 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2144c5bb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(193): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2144c5bb 2013-07-16 17:15:58,775 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): master:50904-0x13fe879789b0004 Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/1/master 2013-07-16 17:15:58,775 WARN [pool-1-thread-1-EventThread] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /1/master 2013-07-16 17:15:58,775 INFO [pool-1-thread-1] util.JVMClusterUtil(309): Shutdown of 1 master(s) and 2 regionserver(s) complete 2013-07-16 17:15:58,775 INFO [pool-1-thread-1-EventThread] util.RetryCounter(54): Sleeping 20ms before retry #1... 
2013-07-16 17:15:58,777 INFO [M:0;ip-10-197-55-49:50904] master.HMaster(597): HMaster main thread exiting 2013-07-16 17:15:58,780 INFO [pool-1-thread-1] zookeeper.MiniZooKeeperCluster(246): Shutdown MiniZK cluster with all ZK servers 2013-07-16 17:15:58,780 WARN [pool-1-thread-1] datanode.DirectoryScanner(289): DirectoryScanner: shutdown has been called 2013-07-16 17:15:58,783 INFO [pool-1-thread-1] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0 2013-07-16 17:15:58,795 WARN [pool-1-thread-1-EventThread] zookeeper.RecoverableZooKeeper(238): Possibly transient ZooKeeper, quorum=localhost:62127, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /1/master 2013-07-16 17:15:58,795 ERROR [pool-1-thread-1-EventThread] zookeeper.RecoverableZooKeeper(240): ZooKeeper exists failed after 1 retries 2013-07-16 17:15:58,796 WARN [pool-1-thread-1-EventThread] zookeeper.ZKUtil(437): master:50904-0x13fe879789b0004 Unable to set watcher on znode /1/master org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /1/master at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:428) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.nodeDeleted(ZooKeeperNodeTracker.java:211) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:331) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:15:58,796 ERROR [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(439): master:50904-0x13fe879789b0004 Received unexpected KeeperException, re-throwing exception org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /1/master at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:428) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.nodeDeleted(ZooKeeperNodeTracker.java:211) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:331) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:15:58,796 FATAL [pool-1-thread-1-EventThread] master.HMaster(2062): Master server abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2013-07-16 17:15:58,796 FATAL [pool-1-thread-1-EventThread] master.HMaster(2067): Unexpected exception handling nodeDeleted event org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /1/master at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:191) at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:428) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.nodeDeleted(ZooKeeperNodeTracker.java:211) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:331) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) 2013-07-16 17:15:58,796 INFO [pool-1-thread-1-EventThread] master.HMaster(2254): Aborting 2013-07-16 17:15:58,877 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x3a633d51-0x13fe879789b0001 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,877 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(389): hconnection-0x3a633d51-0x13fe879789b0001 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,878 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): Replication Admin-0x13fe879789b0002 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,878 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(389): Replication Admin-0x13fe879789b0002 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,939 DEBUG [M:0;ip-10-197-55-49:50904-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x105b3e5d-0x13fe879789b0007 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,939 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x1a46a171-0x13fe879789b001d Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,939 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(389): hconnection-0x1a46a171-0x13fe879789b001d Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,939 DEBUG [M:0;ip-10-197-55-49:50904-EventThread] zookeeper.ZooKeeperWatcher(389): hconnection-0x105b3e5d-0x13fe879789b0007 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,941 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): cluster1-0x13fe879789b0000 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,941 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): hconnection-0x61136da6-0x13fe879789b0008 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,942 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(389): hconnection-0x61136da6-0x13fe879789b0008 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,941 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(389): cluster1-0x13fe879789b0000 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,941 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(307): cluster2-0x13fe879789b0003 Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,942 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(389): cluster2-0x13fe879789b0003 Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,942 DEBUG [RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(307): connection to cluster: localhost:62127:/2-0x13fe879789b000b Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,942 DEBUG 
[RS:1;ip-10-197-55-49:49955-EventThread] zookeeper.ZooKeeperWatcher(389): connection to cluster: localhost:62127:/2-0x13fe879789b000b Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,943 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(307): connection to cluster: localhost:62127:/2-0x13fe879789b000c Received ZooKeeper Event, type=None, state=Disconnected, path=null 2013-07-16 17:15:58,943 DEBUG [RS:0;ip-10-197-55-49:49041-EventThread] zookeeper.ZooKeeperWatcher(389): connection to cluster: localhost:62127:/2-0x13fe879789b000c Received Disconnected from ZooKeeper, ignoring 2013-07-16 17:15:58,944 WARN [DataNode: [file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data3,file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data4] heartbeating to localhost/127.0.0.1:43175] datanode.BPServiceActor(575): BPOfferService for Block pool BP-182397264-10.197.55.49-1373994843896 (storage id DS-1190717763-10.197.55.49-39475-1373994845766) service to localhost/127.0.0.1:43175 interrupted 2013-07-16 17:15:58,944 WARN [DataNode: [file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data3,file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data4] heartbeating to localhost/127.0.0.1:43175] datanode.BPServiceActor(685): Ending block pool service for: Block pool BP-182397264-10.197.55.49-1373994843896 (storage id DS-1190717763-10.197.55.49-39475-1373994845766) service to localhost/127.0.0.1:43175 2013-07-16 17:15:58,948 WARN [pool-1-thread-1] datanode.DirectoryScanner(289): DirectoryScanner: shutdown has been called 2013-07-16 17:15:58,951 INFO [pool-1-thread-1] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0 2013-07-16 17:15:59,055 WARN [DataNode: [file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data1,file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data2] heartbeating to localhost/127.0.0.1:43175] datanode.BPServiceActor(575): BPOfferService for Block pool BP-182397264-10.197.55.49-1373994843896 (storage id DS-858037074-10.197.55.49-39876-1373994845766) service to localhost/127.0.0.1:43175 interrupted 2013-07-16 17:15:59,055 WARN [DataNode: [file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data1,file:/home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/cc61e291-5ad3-4666-a722-c22e223799d0/dfscluster_7d7fc920-b774-4237-84e2-2cb0b396effb/dfs/data/data2] heartbeating to localhost/127.0.0.1:43175] datanode.BPServiceActor(685): Ending block pool service for: Block pool BP-182397264-10.197.55.49-1373994843896 (storage id DS-858037074-10.197.55.49-39876-1373994845766) service to localhost/127.0.0.1:43175 2013-07-16 17:15:59,059 
WARN [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5a425eb9] blockmanagement.BlockManager$ReplicationMonitor(3081): ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3079) at java.lang.Thread.run(Thread.java:662) 2013-07-16 17:15:59,060 WARN [org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@b083717] blockmanagement.DecommissionManager$Monitor(78): Monitor interrupted: java.lang.InterruptedException: sleep interrupted 2013-07-16 17:15:59,063 INFO [pool-1-thread-1] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0 2013-07-16 17:15:59,197 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(922): Minicluster is down